Financial companies need Java EE to power their business today. Rakuten Card, one of the largest credit card companies in Japan, adopted Java EE 7 for the rearchitecture of its credit card core systems, migrating off one of the oldest COBOL-based mainframes in Japan. Additionally, we chose Apache Spark as our platform for super-rapid batch execution. We completed this big core-system migration project successfully.
You can learn why we chose Java EE and Apache Spark for super-rapid batch execution, and hear about our experiences and the lessons we learned. How do you start such a big project? Why did we choose these technologies, how did we port the system, how did we use Apache Spark for performance improvements, and how did we launch? We’ll answer these questions and any others you may have.
Case Study: Credit Card Core System with Exalogic, Exadata, Oracle Cloud Mach... – Hirofumi Iwasaki
To increase business opportunities, financial industry companies require the power, flexibility, and scalability of the latest enterprise technologies for their 24/7 services. Rakuten Card, one of the largest credit card companies in Japan, recently renewed its credit card core processing systems using Java EE. Among the myriad of available technologies, why did we choose Exalogic and Exadata, with a distributed Apache Spark configuration? How did we port from one of the oldest COBOL-based mainframes in Japan? What were the key success factors in launching and operating this mission-critical service? This session unveils our results and how our selections are effective for financial enterprise systems.
Deep Learning for Java Developer - Getting Started – Suyash Joshi
This presentation was delivered on April 14, 2020 to the San Francisco Java User Group (SF JUG) over Zoom. Over half of the time was spent on Live Coding and Demo of ML Apps using TF-Java & DJL Frameworks.
Database@Home : Data Driven Apps : Core-dev or Low Code UI – Tammy Bednar
There’s more than one approach to creating apps these days – knowing the options and how to choose one is critical. Low-code frameworks take a top-down approach, which can reduce complexity and development time significantly. On the other hand, core-dev frameworks are a better choice when control over every aspect of an app is essential. In this session, attendees will be introduced to a low-code framework (APEX) and a core-dev one (JET) to see how the approaches and results differ.
In this talk, you'll learn about the new features in JDK 11, the first long-term support (LTS) release in a new, faster Java SE release cadence.
We'll discuss how these features benefit your code, and how existing code can be brought forward to benefit from JDK 11. Last but not least, we'll discuss how to keep up with the innovations coming in JDK 12 and future releases.
Database@Home : The Future is Data Driven – Tammy Bednar
These slides were presented during the Database@Home : Data-Driven Apps event. This session will discuss the importance of data to an organisation and the need to build applications where the value within that data can easily be exploited. To achieve that aim we need to start building applications that benefit from the flexibility of new development paradigms but don't create artificial barriers of complexity that stop us from easily responding to change within our organisations.
Oracle RAC 19c - the Basis for the Autonomous Database – Markus Michalewicz
Oracle Real Application Clusters (RAC) has been Oracle's premier database availability and scalability solution for more than two decades as it provides near linear horizontal scalability without the need to change the application code. This session explains why Oracle RAC 19c is the basis for Oracle's Autonomous Database by introducing some of its latest features, some of which were specifically designed for ATP-D, as well as by taking a peek under the hood of the dedicated Autonomous Database Service (ATP-D).
The new JDK 12 release cycle and the altered support model deliver the latest features faster than previous versions, on a regular basis, in combination with the evolution of existing frameworks.
MIGRATION OF AN OLTP SYSTEM FROM ORACLE TO MYSQL AND COMPARATIVE PERFORMANCE ... – cscpconf
Across the various RDBMS vendors, Oracle has more than 60% [6] of the market share, with a complete, feature-rich, and secure offering. This has made Oracle the default database choice for systems of all sizes.
There are many open-source databases, such as MySQL and PostgreSQL, which have now evolved into complete, feature-rich offerings and come with zero licensing fees. This makes it an attractive proposition to migrate from Oracle to an open-source distribution to cut down on licensing costs.
Migrating an application from a commercial vendor to open source raises typical concerns of functionality and performance. Though various tools and offerings are available to assist with migration, there are currently no reference points for the exact effort and impact of migration on the application. We therefore studied the impact and effort involved in migrating an OLTP application. We successfully migrated the application and did a performance comparison, which is covered in the paper. The paper also covers the tool and methodology used, along with the limitations of MySQL, and presents the learnings of the entire exercise.
Azul CTO Gil Tene describes the changes being made to the Java Virtual Machine (JVM) for Java 8. Learn how these changes could affect your applications and development teams.
A Java Implementer's Guide to Better Apache Spark Performance – Tim Ellison
Apache Spark has rocked the big data landscape, becoming the largest open source big data community, with over 750 contributors from more than 200 organizations. Spark's core tenets of speed, ease of use, and a unified programming model fit neatly with the high-performance, scalable, and manageable characteristics of modern Java runtimes. In this talk we introduce the Spark programming model and describe some of our unique Java 8 capabilities in the JIT, fast networking, serialization techniques, and GPU off-loading that deliver the ultimate big data platform for solving business problems. We will demonstrate how solutions previously infeasible with regular Java programming become possible with our high-performance Spark core runtime, enabling you to solve problems smarter and faster.
Presented at Jfokus Feb 2016
JSR 236 Concurrency Utils for EE presentation for JavaOne 2013 (CON7948) – Fred Rowe
A presentation about the newly released JSR 236 spec that Anthony Lai (Oracle) and Fred Rowe (IBM) gave as session CON7948 at JavaOne SF 2013.
JSR 236 is part of the Java EE 7 platform and defines extensions to the SE concurrency APIs that allow them to be used in an app server environment.
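As a quick taste of what JSR 236 enables (a minimal sketch, not from the slides; the workload and class names are hypothetical), an application component can submit tasks to a container-managed executor instead of spawning its own threads:

import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

// Minimal sketch of JSR 236 usage (Java EE 7): submitting work to the
// container-managed executor instead of creating bare threads.
@Stateless
public class ReportService {

    // The platform default managed executor defined by Java EE 7.
    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    private ManagedExecutorService executor;

    public Future<String> buildReportAsync(String reportId) {
        // The task runs with the container's context (JNDI, security, classloader).
        return executor.submit(() -> "report:" + reportId); // hypothetical workload
    }
}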
A couple of major players in the internet space, in particular Amazon, LinkedIn, and Google, opened the eyes of the corporate world to the coming onslaught of NoSQL workloads. As with every new market opportunity, some young guns quickly jumped in to capitalize on the need and confusion, but things are starting to settle, and NoSQL is maturing as enterprise-ready solutions break away with long-sought-after features. In this webcast, learn about NoSQL convergence from Oracle, the leader in data management, and hear why some flavors of NoSQL are here to stay.
In this presentation, we (Jonatan and Marco) investigated the new official and hidden features of Java 12.
We collected code examples and stories behind this release. We were happy about some features and disappointed with others.
We hope that with these slides you can quickly, and with some fun, learn what's coming in the new version of Java.
"What does it take to transform a legacy mainframe COBOL system to state-of-the-art Java EE platform? How the Apache Spark clustering framework fits in all of this? Attend this session to find out, with concrete solutions to some of the major problems of turning a procedural program object-oriented, and parallelizing sequential processing."
Java EE 7 with Apache Spark for the world's largest credit card core systems, ... – Rakuten Group, Inc.
Financial industry companies need Java EE to power their business today. Rakuten Card, one of the largest credit card companies in Japan, adopted Java EE 7 for its credit card core systems architecture, migrating off one of the oldest COBOL-based mainframes in Japan. Additionally, we chose Apache Spark as our platform for super-rapid batch execution. We completed this big core-system migration project successfully.
You can learn why we chose Java EE and Apache Spark for super-rapid batch execution, and hear about our experiences and the lessons we learned. How do you start such a big project? Why did we choose these technologies, how did we port the system, how did we use Apache Spark for performance improvements, and how did we launch? We’ll answer these questions and any others you may have.
Additionally, we will unveil our future roadmap for expanding our systems with cutting-edge technology and standards.
Voldemort & Hadoop @ LinkedIn, Hadoop User Group Jan 2010 – Bhupesh Bansal
A Jan 22nd, 2010 Hadoop meetup presentation on Project Voldemort and how it plays well with Hadoop at LinkedIn. The talk focuses on the LinkedIn Hadoop ecosystem: how LinkedIn manages complex workflows, data ETL, data storage, and online serving of 100 GB to TBs of data.
Webinar: High Performance MongoDB Applications with IBM POWER8 – MongoDB
Innovative companies are building Internet of Things, mobile, content management, single view, and big data apps on top of MongoDB. In this session, we'll explore how the IBM POWER8 platform brings new levels of performance and ease of configuration to these solutions which already benefit from easier and faster design and development using MongoDB.
The new architectures, web services and microservices, applications and apps, bots, IoT, AI, etc. that organizations demand depend more and more on the talent and experience of database administrators for the advice, suggestions, and answers that bring differential value to development groups and business users.
We show you the keys to the new role of the DBA, which complements the "A" of Administer with: Analyze, Advise, Automate, and create efficient and Autonomous Architectures for Advanced data management, collaborating with developers and users from a deep knowledge of databases.
Building and deploying LLM applications with Apache Airflow – Kaxil Naik
Behind the growing interest in generative AI and LLM-based enterprise applications lies an expanded set of requirements for data integrations and ML orchestration. Enterprises want to use proprietary data to power LLM-based applications that create new business value, but they face challenges in moving beyond experimentation. The pipelines that power these models need to run reliably at scale, bringing together data from many sources and reacting continuously to changing conditions.
This talk focuses on the design patterns for using Apache Airflow to support LLM applications created using private enterprise data. We’ll go through a real-world example of what this looks like, as well as a proposal to improve Airflow and to add additional Airflow Providers to make it easier to interact with LLMs such as the ones from OpenAI (such as GPT4) and the ones on HuggingFace, while working with both structured and unstructured data.
In short, this shows how these Airflow patterns enable reliable, traceable, and scalable LLM applications within the enterprise.
https://airflowsummit.org/sessions/2023/keynote-llm/
Webinar: Large Scale Graph Processing with IBM Power Systems & Neo4j – Neo4j
We live in a profoundly connected world. From supply chains to payment networks to digital business and complex portfolios, our ability to understand and navigate not just data, but the relationships inside the data, plays an increasingly important role in all aspects of business. Highly connected value chains that generate massive volumes of connected data create an opportunity for graph analysis, which Gartner describes as "the single most effective competitive differentiator for organizations pursuing data-driven operations and decisions." This talk will introduce the power of graph databases and share how the latest IBM Power Systems offerings featuring the POWER8 processor and CAPI-attached Flash enable unique scaling, performance, and price-performance advantages for Neo4j workloads.
Steve Fields from IBM presented these slides at the recent Stanford HPC Conference.
Learn more: http://www.open-power.org/
and
http://www.hpcadvisorycouncil.com/events/2014/stanford-workshop/agenda.php
Watch the video presentation: http://insidehpc.com/2014/02/14/openpower-foundation-overview/
Here are my notes from SAP TechEd 2007. The entire presentations from the event can be purchased from www.sdn.sap.com. I learned a lot in 2007, and then again in 2008. Great sessions. E-mail me if you see anything that isn't correct.
OAP: Optimized Analytics Package for Spark Platform with Daoyuan Wang and Yua... – Databricks
Spark SQL is one of the most popular components in big data warehouses for SQL queries in batch mode, and it allows users to process data from various data sources in a highly efficient way. However, Spark SQL is a general-purpose SQL engine and not well designed for ad hoc queries. Intel invented an Apache Spark data source plugin called Spinach to fulfill such requirements, by leveraging user-customized indices and fine-grained data cache mechanisms.
To be more specific, Spinach defines a new Parquet-like data storage format, offering a fine-grained hierarchical cache mechanism in the unit of “Fiber” in memory. Even existing Parquet or ORC data files can be loaded using corresponding adaptors. Data can be cached in off-heap memory to boost data loading. What’s more, Spinach has extended the Spark SQL DDL, to allow users to define the customized indices based on relation. Currently, B+ tree and bloom filter are the first two types of indices supported. Last but not least, since Spinach resides in the process of Spark executor, there’s no extra effort in deployment. All you need to do is to pick Spinach from Spark packages when launching the Spark SQL.
Spinach has been used in Baidu's production environment since Q4 2016. It has helped several teams migrate their regular data analysis tasks from Hive or MR jobs to ad hoc queries. In Baidu's search ads system, FengChao, data engineers analyze advertising effectiveness based on several TBs of display and click logs every day. Spinach brings a 5x boost compared to original Spark SQL (version 2.1), especially in scenarios with complex searches and large data volumes. It improves the average search time from minutes to seconds, while adding only a 3% data size increase for a single index.
Consideration points for migrating from older pre-J2EE, J2EE 1.2-1.4, and Java EE 5-6 platforms to EE 7, with migration points especially for web front-end systems and back ends: JSP to JSF and EJB to CDI, with migration procedure details. Slide materials from Java Day Tokyo 2016.
Java EE 6 Adoption in One of the World's Largest Online Financial Systems (fo... – Hirofumi Iwasaki
Financial companies need Java EE to power their business today. Rakuten Card, one of the largest credit card companies in Japan, adopted Java EE 6 for its online systems rearchitecture. You can learn why we chose Java EE, and hear about our experiences and the lessons we learned. This is the first time a large credit card company in Japan is sharing this story.
How do you start such a big project? Why did we choose Java EE, and how did we select the in-house development policies, educate ourselves, and develop the additional libraries? How did we launch within only six months? What is the key factor in running 24/7 critical financial systems successfully? How will we migrate to EE 7 in the future? We’ll answer these questions and any others you may have.
This version is the exclusive session for JJUG CCC Fall 2014 in Japan, combining both the JavaOne and OOW 2014 sessions.
Case Study of Financial Web System Development and Operations with Oracle Web... – Hirofumi Iwasaki
To stay ahead of the technology curve, financial companies require the power, flexibility, and scalability of latest enterprise technologies for 24/7 services. Rakuten Card, one of the largest credit card companies in Japan, recently renewed its web front-end systems utilizing Java EE. This session provides answers to the following questions: Among the myriad of available technologies, why did it choose Oracle WebLogic and Oracle Exadata, managed by Oracle Enterprise Manager? How did it drive this huge project to completion in only six months, using only in-house development? What were the key success factors in launching and operating this mission-critical service? Hear about its extraordinary improvement results and how its selections are effective for financial enterprise systems.
Java EE 6 Adoption in One of the World’s Largest Online Financial Systems [Ja... – Hirofumi Iwasaki
Financial companies need Java EE to power their business today. Rakuten Card, one of the largest credit card companies in Japan, adopted Java EE 6 for its online systems rearchitecture. Learn why it chose Java EE, and hear about its experiences and lessons learned. This is the first time a large credit card company in Japan is sharing its story. How do you start such a big project? Why did it choose Java EE? How did it select the in-house development policies, educate itself, and develop the additional libraries? How did it launch within only six months? What is the key factor driving 24/7 critical financial systems successfully? How do you migrate to Java EE 7 in the future? This presentation answers these questions and any others you may have.
Happy Java 8 release! But what about Java EE 7? Does SE 8 work with EE 7? These slides show the current situation of applying SE 8 to EE 7. This is the revised version of the "JJUG CCC 2014 Spring" session, prepared for the "Java 8 workshop at Fukuoka".
Many enterprise systems built between 2000 and 2010 use old J2EE specifications with the Struts web framework. But J2EE has since evolved into Java EE, with the standard web framework JSF 2. With these slides you can learn how to migrate old-style J2EE + Struts systems to sophisticated Java EE with the JSF 2 specification. These slides were used in the Java Day Tokyo 2014 C4 session, presented by the author. Some slides are specialized for Japanese enterprise systems, but the theme is standard and applies to almost all J2EE users in the world.
Happy Java SE 8 was released! But what about Java EE?
This material shows the current status of EE 6/7 with SE 8, and some limitations of current EE 7 app servers with SE 8.
This session material is for the Japan Java Users Group (JJUG) CCC 2014 Spring session. #jjgc_ccc #ccc_r11
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed in releasing software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to part 3 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA Connect – Kari Kakkonen
My slides and Rik Marselis's slides from the 30.5.2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Java EE 7 with Apache Spark for the World’s Largest Credit Card Core Systems [CON4998]
1. Java EE 7 with Apache Spark for the World’s Largest Credit Card Core Systems [CON4998]
Oct 4, 2017
Hirofumi Iwasaki, Ville Misaki
System Strategy Department, Rakuten Card Co., Ltd.
2. Speaker Biography
Hirofumi Iwasaki @HirofumiIwasaki
Group Manager, Technology Strategy Group, System Strategy Department, Rakuten Card Co., Ltd.
Career: planning, designing, and implementing huge enterprise systems for financial, manufacturing, and public systems with Java EE in Japan for over 18 years.
Opus, lectures, etc.: conferences including OOW 2014, JavaOne 2014-2015, Java Day Tokyo 2014-2015, Rakuten Tech Conference 2013-2016, etc.
3. Agenda
Part 1 – Perfect Design
1. About Rakuten Card
2. Background: Hardware, Software, Database
Part 2 – Harsh Reliability
3. Performance
4. Apache Spark
5. Judgement Day
6. Into the Future
8. About Rakuten Card
A top-level credit card company in Japan, and the core of the Rakuten ecosystem.
3rd position in total transaction volume in 2016, and growing rapidly.
9. Conference sessions at JavaOne 2014 and 2015
Shared our web front-end systems improvement activities.
Based on Java EE 6; started on GlassFish 3, migrated to WebLogic Server 12c.
In-house development. A great success.
12. Old core systems – Mainframe
Old architecture – over 20 years old.
High cost structure.
Capacity and performance limitations – no scale-out.
Low maintainability, with piled-up programs, an old architecture, and the network database "NDB".
Risk of vendor lock-in.
Limited security for the significant data.
14. Limitations of the old mainframe systems – Business
Old: Cannot scale out. → New: Apply a scale-out-enabled architecture, with Oracle RAC and a clustered WebLogic Server.
Old: Low connectivity to other systems. → New: Apply Java EE and the latest protocols.
Old: Little security management on data. → New: Apply Oracle Database security options.
Old: No modern automated testing environment. → New: Introduce the latest automated testing environment.
15. Limitations of the old mainframe systems – Development
Old: No local development. → New: Apply Java EE and Oracle DB for local development.
Old: Hard to understand because of its old architecture. → New: Apply the latest Java EE as its base.
Old: Poor version control systems. → New: Introduce a Git server and issue tracking systems.
Old: No development community. → New: Apply Java EE and join the open community.
16. Limitations of the old mainframe systems – Operation
Old: Poor automated operations. → New: Introduce Jenkins and automation.
Old: Manual error monitoring. → New: Include Zabbix monitoring to cover the new core system.
Old: Difficult to pinpoint the cause of an error. → New: Use standard Java tools: stack traces, Flight Recorder, etc.
Old: Tons of unused code. → New: Apply an automated source code analysis tool.
17. Phases of the improvement – 3.0
1.0 Initial phase (achieved): outsource-based, just started; vendor locked-in.
2.0 In-house development (achieved): in-house development; differentiated by lower costs and faster delivery.
3.0 Standardization (current, the standard architecture): standardized system architecture, both for hardware and software.
4.0 Data Optimized (next): overwhelming differentiation, with an enabling architecture for customer-centric services.
19. Big Improvement – Functionality: Hardware 1/2
Old core systems: mainframe. → New core systems: Oracle Exalogic + Exadata + ZFS Servers.
20. Big Improvement – Functionality: Hardware 2/2
New core systems: Oracle Exalogic + Exadata + ZFS Servers, plus an Oracle Cloud Machine (on-premise private cloud) as a low-cost temporary resource for temporary request spikes.
21. Big Improvement – Reliability: Software Platform
App server – Old: COBOL. → New: WebLogic Server, the financial de facto standard; Java EE compliant; matured, from 1997.
Database – Old: network DB. → New: Oracle Database, the financial de facto standard; ISO/IEC 9075 SQL compliant; matured, from 1983.
22. Big Improvement – Portability: Platform independent
Old: mainframe, Japanese COBOL, vendor locked-in.
New: hardware-, OS-, and app server-independent; vendor free (WildFly, Payara, WebLogic, WebSphere; HP-UX, AIX, Solaris, Linux, Windows, macOS).
23. Software Migration – Conversion
A custom-made source code converter converts the Japanese COBOL source code to Java EE source code, keeping the original core business logic.
24. Software Migration – Conversion: Dual Source
Old: Japanese COBOL – Japanese source code, almost abandoned; no books, no community.
New: a "Dual Source Architecture":
- Java from the web systems, for new logic – introduces the power of Java EE.
- Java converted from the old system's COBOL, via a converter from YPS to Java – ease of migration and resource re-use.
25. Big Improvement – Efficiency: API-nized
New: clients (web browsers, rich clients, operation terminals, and external customers) reach the system through BIG-IP load balancers and façades, across intranet and external zones; the façades call the core business logic APIs, which run on real-time servers (WebLogic) and batch servers (Spark & Java) driven by a scheduler, backed by Exadata and integrated with mail/form services.
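As a rough illustration of this API-nized design (a hypothetical sketch, not the actual system; all class, path, and field names are invented), a Java EE 7 façade in front of a core business logic API could look like this JAX-RS endpoint:

import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical façade: external clients reach it through BIG-IP,
// and it delegates to a core business logic API on the real-time servers.
@Stateless
@Path("/cards")
public class CardFacade {

    @Inject
    CardStatementService statements; // hypothetical CDI bean wrapping the converted core logic

    @GET
    @Path("/{cardId}/statement")
    @Produces(MediaType.APPLICATION_JSON)
    public Statement statement(@PathParam("cardId") String cardId) {
        return statements.currentStatement(cardId);
    }
}

// Minimal supporting types so the sketch is self-contained.
interface CardStatementService {
    Statement currentStatement(String cardId);
}

class Statement {
    public String cardId;
    public long balanceInYen;
}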
26. Overview of Data Conversion
Old: Japanese COBOL business logic, with a common module (data accessor) on top of ISAM, VSAM, and NDB.
New: Java business logic, with a common module (database accessor) on top of the new (web) database; the ISAM/VSAM/NDB data is migrated to the new database.
27. Schema Conversion Policy – From ISAM/VSAM
Old (ISAM/VSAM) → New (an RDB table such as A_RDB_TABLE, with a primary key and other columns):
- A unique record key becomes the primary key, with a unique index added.
- A non-unique record key gets an index only.
A JPA sketch of this policy follows below.
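On the Java side, this policy might look like the following hedged sketch (hypothetical column names; assuming JPA 2.1 entities are used by the converted database accessors):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;

// Hypothetical mapping of a converted ISAM/VSAM record to an RDB table.
@Entity
@Table(name = "A_RDB_TABLE",
       indexes = @Index(name = "IX_GROUP_KEY", columnList = "GROUP_KEY")) // non-unique record key -> index only
public class ARdbTable {

    @Id
    @Column(name = "RECORD_KEY") // unique record key -> primary key (unique index)
    private String recordKey;

    @Column(name = "GROUP_KEY")  // non-unique record key
    private String groupKey;

    @Column(name = "OTHER_COLUMN")
    private String otherColumn;
}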
28. Auto testing environment
1. Register auto test scenarios.
2. Execute automated tests repeatedly on the testing server.
3. Run the tests on the staging environment.
Automated testing using the latest IBM Rational test software; regression tests are enabled whenever something changes, reducing the possibility of errors in production releases.
35. Performance – First Trial – Details
Batches run as networks: they are hierarchical, have a critical path, and must fit within a time window. In the first trial, several steps along the path were slow.
36. Bad Performance – Causes
Automatic code conversion: the COBOL program flow was emulated in Java, with COBOL-like data structures in Java.
DB access logic: the business logic was built on a network DB, and NDB and RDB are good at different tasks.
37. Bad Performance – Cause: COBOL Emulation
COBOL vs. Java:
- Goto statements – imitation is complex.
- Sub-program calls – heavy.
- No local variables – tight coupling.
- No libraries – copy & paste code.
- Few shared data structures – copy & paste definitions.
- No shared enums/constants – magic numbers.
38. Bad Performance – Cause: COBOL Emulation
COBOL data structures:
- Fixed length – hard-coded.
- String-based.
- Data blocks inside the program, often with thousands of fields.
- Hierarchical fields: content is joined/split automatically, with a variable namespace under each parent – even five levels deep. A simplified sketch follows below.
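To make the "COBOL-like data structures" problem concrete, a converter might emit something like the following (a heavily simplified, hypothetical sketch; real converted records carry thousands of fields across several levels):

// Hypothetical output of automatic COBOL-to-Java conversion:
// one fixed-length, string-based record whose child fields are
// slices of the parent, joined and split on every access.
public class CustomerRecord {

    private String record = String.format("%-30s", ""); // hard-coded record length: 30

    // Child fields, like COBOL group items, are substrings of the parent.
    public String getCustomerId()   { return record.substring(0, 10); }
    public String getCustomerName() { return record.substring(10, 30); }

    public void setCustomerId(String id) {
        record = padRight(id, 10) + record.substring(10); // split and re-join the parent
    }

    public void setCustomerName(String name) {
        record = record.substring(0, 10) + padRight(name, 20);
    }

    private static String padRight(String s, int len) {
        return String.format("%-" + len + "s", s).substring(0, len);
    }
}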
40. Bad Performance – Cause: NDB Emulation
The logic was optimized for NDB: read sequentially, with data pre-sorted and pre-formatted. Emulating that in an RDB is an uphill battle.

                    NDB    RDB
Search              Slow   Fast
Sequential access   Fast   Slow
Sorting             Slow   Fast
Formatting          Fast   Slow
41. Performance – Must Improve
The new system must be faster. Time until launch: 1 year.
42. Performance – Solutions?
Options:
- Redesign and re-implement from scratch – not feasible.
- Optimize the framework – limited effectiveness.
- Parallelize the batches – elastic brute force.
46. Apache Spark – Challenges
1. Making the business logic parallel – independent processing.
2. I/O – data transferred over the network.
3. Data ordering – shuffles.
47. Apache Spark – Challenges: Independent Processing
Problem: input data rows are not independent!
Red flags: fields not initialized for each row; code forks early (header & data?).
Analyze the legacy code, then refactor: turn fields into local variables, extract data structures, and initialize the data for each row. Then run & see. A sketch of this refactoring follows below.
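As a hedged illustration of that refactoring (invented names, not the actual converted code), moving shared mutable fields into per-row local state is what makes the rows independently processable:

// Before (converted style): shared mutable fields couple the rows together.
class InterestBatchBefore {
    private String customerId;   // leaks state from the previous row
    private long runningAmount;  // depends on processing order

    void processRow(String[] row) {
        customerId = row[0];
        runningAmount += Long.parseLong(row[1]);
    }
}

// After: all state is initialized per row, so rows can run in parallel.
class InterestBatchAfter {
    Result processRow(String[] row) {
        final String customerId = row[0];           // local, per-row
        final long amount = Long.parseLong(row[1]); // no carried-over state
        return new Result(customerId, amount);
    }

    static final class Result {
        final String customerId;
        final long amount;
        Result(String customerId, long amount) {
            this.customerId = customerId;
            this.amount = amount;
        }
    }
}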
48. Apache Spark – Challenge 1: Independent Processing, Solutions
1. Group related rows together.
2. Process header rows separately.
3. Modify the business logic.
49. Apache Spark – Challenge 1: Independent Processing, Solution 1
Group related rows together: a custom data reader makes multiple rows behave like one row, and each row of a group is processed in a loop on the same node (e.g. rows with IDs 1, 1, 2, 3, 3, 4 form the groups 1, 2, 3, 4).
Pro: the business logic is not modified.
Con: the relationships may be too complex, and groups may grow too big.
A minimal grouping sketch follows below.
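A minimal Spark (Java) sketch of the grouping idea, assuming the group ID is the first comma-separated field of each line (illustrative only; the real custom reader and business logic are far richer):

import java.util.Arrays;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class GroupedRows {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "grouped-rows");

        JavaRDD<String> lines = sc.parallelize(Arrays.asList(
                "1,a", "1,b", "2,c", "3,d", "3,e", "4,f"));

        // Key each row by its group ID so related rows travel together.
        JavaPairRDD<String, Iterable<String>> groups =
                lines.mapToPair(l -> new Tuple2<>(l.split(",")[0], l))
                     .groupByKey();

        // All rows of one group are processed in a loop on the same node,
        // so the sequential logic inside a group stays untouched.
        JavaRDD<String> results = groups.map(g -> {
            StringBuilder out = new StringBuilder(g._1).append(':');
            for (String row : g._2) {
                out.append(' ').append(row); // per-row business logic goes here
            }
            return out.toString();
        });

        results.collect().forEach(System.out::println);
        sc.stop();
    }
}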
50. Apache Spark – Challenge 1: Independent Processing, Solution 2
Process header rows separately: run the business logic for the header rows first and collect the results in a NavigableMap; then run the business logic for the data rows, initializing each row's data from the previous header via floorKey(dataRowIndex). (E.g. each Head row is followed by the Data rows that share its ID.)
Pro: minimal changes to the business logic.
Con: the relationships may be too complex.
A small floorKey sketch follows below.
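A small sketch of the floorKey lookup itself (plain Java; the row indices are hypothetical), assuming the pass over the header rows keys its results by global row index:

import java.util.NavigableMap;
import java.util.TreeMap;

public class HeaderLookup {
    public static void main(String[] args) {
        // Pass 1: results of the header-row logic, keyed by global row index.
        NavigableMap<Long, String> headers = new TreeMap<>();
        headers.put(0L, "header-A"); // governs data rows 0..3
        headers.put(4L, "header-B"); // governs data rows 4 onwards

        // Pass 2: a data row finds its governing header with floorKey,
        // the greatest header index that is <= its own row index.
        long dataRowIndex = 2L;
        Long headerKey = headers.floorKey(dataRowIndex);
        System.out.println("row " + dataRowIndex + " -> " + headers.get(headerKey)); // header-A
    }
}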
51. Apache Spark – Challenge 1: Independent Processing, Solution 3
Modify the business logic: a row relationship can be removed if it is unintentional (a bug), exists only for an unnecessary optimization, or carries data that could be retrieved some other way.
Pro: a high chance of good performance.
Con: a high chance of new bugs.
52. Apache Spark – Challenge 2: I/O
Input and output data must be shared, via network storage – and how long does it take to copy 200 GB? (At a sustained 10 Gbit/s, the copy alone takes about 160 seconds.) With light processing, the transfer/process/transfer cycle is dominated by the transfers; with heavy processing, the transfers are amortized.
53. Apache Spark – Challenge 3: Data Ordering
Sequential batches rely on ordering, which is tricky to keep in Spark.
Safe operations: map, filter, zip.
Unsafe operations (they shuffle): join, group, sort.
A sketch of pinning the order across a shuffle follows below.
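A hedged sketch of keeping order across a shuffle (not the production code): partition-wise operations such as map and filter preserve row order, so the trick is to pin each row's position with zipWithIndex before any order-destroying operation and sort on that index afterwards:

import java.util.Arrays;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class OrderingDemo {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "ordering-demo");

        JavaRDD<String> rows = sc.parallelize(Arrays.asList("r0", "r1", "r2", "r3"), 2);

        // Safe: map keeps the order within each partition.
        JavaRDD<String> mapped = rows.map(r -> r + "!");

        // Pin each row's original position before shuffling...
        JavaPairRDD<String, Long> indexed = mapped.zipWithIndex();

        // ...and restore the original order afterwards by sorting on the index.
        JavaRDD<String> restored = indexed
                .mapToPair(t -> new Tuple2<>(t._2, t._1))
                .sortByKey()
                .values();

        System.out.println(restored.collect()); // [r0!, r1!, r2!, r3!]
        sc.stop();
    }
}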
54. Spark Takeaways
Good for: heavy processing, independent input data records, one input with multiple outputs, unordered data.
Not so great for: little processing, dependencies between data records, merging multiple data sources.
60. Next Phase
1.0 Initial phase (achieved): outsource-based, just started; vendor locked-in.
2.0 In-house development (achieved): in-house development; differentiated by lower costs and faster delivery.
3.0 Standardization (current, the standard architecture): standardized system architecture, both for hardware and software.
4.0 Data Optimized (next): overwhelming differentiation, with an enabling architecture for customer-centric services.