How can Oracle Forms (or other legacy) applications be modernized to fit into a contemporary IT architecture? Trends, concepts and technologies are discussed.
2. About the Author
● Technical consultant
• at Oracle Consulting Germany
• self-employed since 2008
● Expert for
• Oracle Database
• Fusion Middleware, ADF, eBusiness Suite
• Oracle Forms modernization and migration
• Oracle Reports and BI Publisher
3. Agenda
• Stating the challenge
• Migration roadmap
• Analyze existing architecture
• Forms modernization
• Forms alternatives – (not so) new technologies
• Design service and presentation layer
• Target Architecture Scenario
4. Common Questions
• For how long will I be able to operate my Forms applications?
• Commitment by Oracle indefinite?
• Forms developers retire
• One main competitor runs its software at lower cost and reacts with great agility to new business requirements. How can we get there?
• Our field sales representatives need some of our application data on their mobile devices.
• Controlling is not happy with the old-fashioned and static presentation of the revenue numbers.
The following slides are a starting point to find adequate answers.
5. Assumption
• IT of the last decades was highly focused on manual data entry
Reason: humans were the primary data source and recipient
• New technologies have gained importance:
• Mobile devices
• Internet of things
• Deep learning
• RFID and others
• Consequence is evolving human interaction (‘human tasks’)
• Classic desktop applications will lose importance
• Interaction of multiple services becomes more relevant
6. Evolution of Human Interfaces
• Mass data query and
entry screens
• Desktop oriented
• Static forms
• User driven
• Data evaluation, data mining
• Decision support
• Interactive dynamic dashboards
• Desktop and mobile
• Event driven, real time
8. Transition Roadmap
Analyze Existing Architecture
Examine Target Technologies
Define Modernization Scope
Design the Services
Design the Presentation Layer
9. Analyze Existing Architecture
1. Locate the business logic:
Database, Forms modules, Java programs
2. Identify unused and deprecated components (see the query sketch below):
Dead code, obsolete Forms modules and database objects
3. Identify components
• to be exposed as a service
• to be replaced by an external service
4. Examine peripheral/supporting components:
Interfaces, batch jobs, reporting, security
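For step 2, a minimal SQL sketch of how unused PL/SQL units might be located via the data dictionary; the schema name HR is a hypothetical placeholder. Note that this only sees references between database objects, so units called exclusively from Forms modules will still be listed and must be cross-checked against the Forms sources:

-- Hypothetical schema HR; lists PL/SQL units no other database object references
SELECT o.object_type, o.object_name
  FROM all_objects o
 WHERE o.owner = 'HR'
   AND o.object_type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE')
   AND NOT EXISTS (SELECT 1
                     FROM all_dependencies d
                    WHERE d.referenced_owner = o.owner
                      AND d.referenced_name  = o.object_name
                      AND d.referenced_type  = o.object_type)
 ORDER BY o.object_type, o.object_name;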
10. Transition Roadmap
Analyze Existing Architecture
Examine Target Technologies
Define Modernization Scope
Design the Services
Design the Presentation Layer
11. Examine Target Technologies
Full Stack (Oracle)
• Forms (modernization)
• ADF (ADF Faces with Task Flows and ADF Business Components)
• Application Express (APEX)
Services
• Oracle REST Data Services (ORDS)
• Service based on Java frameworks (Spring REST, …)
• SOA Suite
User Interface
• JavaScript / TypeScript based frameworks
• JSF
12. Forms Modernization – Chances and Risks
✅ Currently, Oracle commits to Forms for an "indefinite" time
✅ Preserves the investments already made
✅ The skill set for maintenance is available
✅ Still a fast and reliable way to enter mass transactional data on a desktop
✅ Forms 12c features and third-party toolkits enable modernization
❗ The Forms runtime client can become incompatible with the desktop Java Runtime (JRE)
❗ Major browser vendors have discontinued Java applet support
❗ Running Java applications on desktop computers is increasingly considered a security risk and causes unwanted maintenance effort for IT desktop support
13. Partial Migration – Pros and Cons
✅ Smaller project(s); an agile approach is possible
✅ Tailored applications for each target audience:
• Responsive dashboards for controlling and management
• Mobile apps for field service and sales reps
• Fast and reliable data entry forms for back-office desktops
✅ Gradual migration to a modern cloud-based architecture
❌ Possibly increased effort in operations and maintenance
❌ Diverse technology stack
❌ Manual switching between multiple applications can degrade the user experience
A complete migration away from Forms creates a huge project and may not generate the desired results. Let's consider doing it partially and gradually.
14. Tool Supported Migration
• The differences between the legacy and target architectures are huge and are not limited to the programming languages
• Legacy code usually grew over years, collecting many add-ons and workarounds, and contains obsolete portions1), etc.
• If such a real-world legacy application is transformed automatically, will the result
- be lean and contain maintainable, performant code?
- be ready for cloud-oriented architectures?
1) There are software tools that can identify obsolete (unused) code, even unused portions of the UI
15. ADF – Faces, Task Flows and Business Components
✅ Advanced and powerful features (task flows, ADF Business Components, ...)
✅ Moves the focus of development from technology to business requirements
✅ Follows industry standards like JEE and JSF
✅ Support by a large software vendor
❌ Partially proprietary (binding layer, ADF BC, ...)
❌ Aged JSF technology stack
❌ Heavyweight; performance issues are possible
❌ License costs
❌ Lack of proficient ADF developers
❓ Commitment by Oracle?
Once the learning curve has been climbed, building applications with ADF can be very efficient. But this comes with a price tag, and the future of the framework is uncertain.
16. JavaScript based
Is JSF still relevant? Read this opinion: https://www.javabullets.com/is-jsf-still-relevant
❗ The JavaScript framework scene is fast moving
Considerations
❓ Learning curve
❓ Produces maintainable code even for large projects
❓ Component library included or available
❓ Performance
❓ (Projected) market share
❓ Support and community
17. Application Express (APEX)
• Part of the Oracle Database
• Highly declarative web application development
Pros:
✅ Supports agile development
✅ Delivers fast results and contains many standard patterns (menus, forms, master/detail data views, security)
✅ No additional license costs on top of the Oracle Database license
Cons:
❌ Highly proprietary
❌ Strictly dependent on the Oracle Database (vendor lock-in)
❌ Lacks scalability and a layered/API-based architectural approach
❌ Security considerations
It is amazing how powerful applications can be built with APEX in a short time, even with little coding knowledge. But in the end, everything comes at a price.
18. Transition Roadmap
Analyze Existing Architecture
Examine Target Technologies
Define Modernization Scope
Design the Services
Design the Presentation Layer
19. Define Modernization Scope
● Identify screens and modules required for migration
• Functionality not available in Forms (e.g. for mobile devices)
• Strategic
• Other
● Identify related modules that form a business process
• Technically, by code references
• Logically, by user navigation (navigation flow)
● Migrate the identified modules to the target architecture
● Build supporting new interfaces and workflows based on requirements
20. Transition Roadmap
Analyze Existing Architecture
Examine Target Technologies
Define Modernization Scope
Design the Services
Design the Presentation Layer
21. Design the Services
Service Categories
• Administrative (authentication, authorization, …)
• CRUD operations – data retrieval, entry and change
• Validation
• Invoke other systems as services (reporting, interfaces, …)
Implementation
• PL/SQL procedure
• ADF Business Component
• Web Service (can be based on PL/SQL, ADF BC, Java) – see the ORDS sketch below
A unified service layer would be ideal, but not all frontends will be compatible. Forms or JSF can be bound to REST only with high effort, a vendor interface may accept only a specific XML format, and so on. So the services will remain a mix of different technologies, based on the consumers' requirements.
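As one illustration of a web service based on PL/SQL, a minimal ORDS sketch is shown below. The schema name, module, URL pattern and the contracts table with its columns are hypothetical placeholders, not part of the original deck:

BEGIN
  -- Expose the (hypothetical) schema CONTRACT_APP under the base path /contracts
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'CONTRACT_APP',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'contracts');

  -- GET .../contracts/api/contracts/:id returns the matching row(s) as JSON
  ORDS.DEFINE_SERVICE(
    p_module_name => 'contracts.api',
    p_base_path   => 'api/',
    p_pattern     => 'contracts/:id',
    p_method      => 'GET',
    p_source_type => ORDS.source_type_collection_feed,
    p_source      => 'SELECT contract_id, customer_id, status, valid_until
                        FROM contracts
                       WHERE contract_id = :id');
  COMMIT;
END;
/

Validation or more complex use cases could instead point such a handler at a stored procedure (source type PL/SQL), which ties in with the extraction topic on the next slide.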
22. Extracting Business Logic
• Extract PL/SQL from Forms into database stored procedures where it acts like a service.
• Criteria for an ideal extracted stored procedure (see the example below):
- Can its purpose be described like a use case? Examples: 'Get contract details', 'Update stock amount', 'Create new warehouse'
- Does it represent or encapsulate a database transaction, or is it strictly read-only?
- Is it reusable across screens or applications?
- Can the given input and expected output be documented across scenarios?
- Can it be (automatically) tested with a set of defined test cases?
• Most questions answered with
- Yes? Great candidate for a database-driven service!
- No? Consider refactoring during extraction. Why should you keep and maintain components in your architecture when you don't know exactly what they do?
There are tools capable of automatically extracting Forms PL/SQL to the database. They can speed up the extraction but have to be used with caution, as they will do it one-to-one.
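To make the criteria concrete, here is a minimal sketch of what an extracted 'Get contract details' procedure could look like; the contracts table and its columns are hypothetical:

-- Hypothetical table contracts(contract_id, customer_id, status, valid_until)
CREATE OR REPLACE PROCEDURE get_contract_details (
  p_contract_id IN  contracts.contract_id%TYPE,
  p_customer_id OUT contracts.customer_id%TYPE,
  p_status      OUT contracts.status%TYPE,
  p_valid_until OUT contracts.valid_until%TYPE)
AS
BEGIN
  -- Strictly read-only, describes one use case, has documented input and
  -- output, and is reusable from Forms, REST services and batch jobs alike.
  SELECT customer_id, status, valid_until
    INTO p_customer_id, p_status, p_valid_until
    FROM contracts
   WHERE contract_id = p_contract_id;
END get_contract_details;
/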
24. Thinking about Transactions (2)
● The Forms database-centric architecture uses transactions and guarantees data consistency out of the box
● ADF Business Components can emulate the transaction-based approach
● The standard REST approach maps HTTP methods (GET, PUT, ...) one-to-one to CRUD/DML database operations (UPDATE, ...)
● Scenarios where a use case contains more than one CRUD/DML operation in a single transaction need a more customized service implementation (sketched below)
● Alternative approach: abandon full data consistency
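A minimal sketch of such a customized, use-case level service: one stored procedure keeps two DML statements in a single transaction, so one service call stays atomic. The tables and columns are hypothetical:

-- Hypothetical tables stock(product_id, amount) and orders(order_id, status)
CREATE OR REPLACE PROCEDURE ship_order_item (
  p_order_id   IN orders.order_id%TYPE,
  p_product_id IN stock.product_id%TYPE,
  p_quantity   IN PLS_INTEGER)
AS
BEGIN
  UPDATE stock
     SET amount = amount - p_quantity
   WHERE product_id = p_product_id;

  UPDATE orders
     SET status = 'SHIPPED'
   WHERE order_id = p_order_id;

  COMMIT;        -- both changes become visible together ...
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;    -- ... or neither is applied
    RAISE;
END ship_order_item;
/

Exposed through ORDS or a Java service, a single call to such a procedure preserves the consistency the Forms architecture provided out of the box.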
25. Authentication and Authorization
Legacy
• Named database users and assigned database roles
are common in Forms applications
Modernization Strategies
• Make database users and roles available to services
• Migrate users and roles to a new access control service
• PL/SQL code needs rework if it contains references to USER or to database roles, or
• Consider connecting via a named proxy user
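A minimal sketch of the proxy-user strategy; the user names forms_user and app_proxy are hypothetical. The service tier authenticates as the proxy user but acts as the named user, so existing grants and references to USER keep working:

-- Allow the (hypothetical) middle-tier account to proxy the named user
ALTER USER forms_user GRANT CONNECT THROUGH app_proxy;

-- A session opened as app_proxy[forms_user] runs with forms_user's
-- identity and roles, e.g. from SQL*Plus:
--   CONNECT app_proxy[forms_user]/proxy_password@//dbhost:1521/appdb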
26. Transition Roadmap
Analyze Existing Architecture
Examine Target Technologies
Define Modernization Scope
Design the Services
Design the Presentation Layer
27. Design the Presentation Layer
• Design for Desktop and/or Mobile devices
• Unified or separate?
• UI frame organization:
• Menu
• Tabs
• Or: New approach optimized for mobile devices
• Navigation and navigation rules between views
• View design
28. Reporting
• Replace deprecated Oracle Reports
• BI Publisher, Jasper Reports, …
• Check for obsolete reports
• Align with current business requirements
• Are reports still required as static paper/PDF output?
• Consider alternative, contemporary forms of presentation, e.g. interactive dashboards
• Design the integration with the target architecture
• If static (PDF) output is still required:
• Asynchronous generation – notify the recipient of the result (success or failure); see the sketch below
• Delivery method to the recipient
Reporting is easily underestimated in implementation and modernization projects, which is surprising, as the effort can be significant. This slide is just a starting point.
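One way to implement the asynchronous generation could be a database job; a minimal sketch using DBMS_SCHEDULER, where run_contract_report and the request id 4711 are hypothetical:

-- Submit a one-off job that generates the report in the background.
-- run_contract_report(4711) is a hypothetical procedure that renders the
-- output, records SUCCESS or FAILURE, and triggers delivery/notification.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'REPORT_REQ_4711',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN run_contract_report(4711); END;',
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/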
29. Target Architecture Scenario
● Choose (or keep) the technologies that meet your needs
● No new developments in technologies you mark as deprecated for your organization
● Web Services (REST) are the glue for modern architectures
● Gradually evolve to a Cloud / PaaS / SaaS ready architecture
30. Related Topics Down The Road
• Forms modernization
• Oracle Application Express (APEX)
• Interaction between Oracle Forms and web applications
• Security
31. Summary
• Forms (12c) applications can comfortably exist in a current IT landscape.
• A gradual migration is possible and recommended.
• This is just the beginning. Continuous learning is part of the game.
Please share your opinion and experience!
mail@kaimoeller.net LinkedIn: Kai-Uwe Möller