Detect Early Stress in Borrower Accounts
Early Warning Platform: Detect Early Stress in Borrower Accounts
About the Client
The client is an integrity risk analytics start-up specialising in monitoring the creditworthiness of borrowers. It identifies early warning signals, i.e. red flags, on a near real-time basis, helping lenders make informed decisions.
Client’s Product Offerings
The platform is an advanced analytical and collaborative application that helps credit players take proactive action and reduce significant losses in the event of default or winding up by borrowers. It minimizes the inherent information asymmetry between lenders and borrowers and enables early identification of potential problems by providing incipient signals of stress in borrower accounts, helping lenders take timely remedial measures.
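As an illustration of the kind of incipient stress signals such a platform might surface, here is a minimal rule-based sketch in Python. The account fields, thresholds, and flag codes are hypothetical assumptions for illustration, not details of the client's product:

```python
# Hypothetical sketch of rule-based early-warning checks over borrower
# account snapshots; field names and thresholds are illustrative only.

def early_warning_flags(account):
    """Return a list of red-flag codes for one borrower account snapshot."""
    flags = []
    # Sustained over-utilisation of the sanctioned working-capital limit
    if account["utilisation_pct"] >= 100:
        flags.append("LIMIT_OVERDRAWN")
    # Interest or instalment overdue beyond 30 days signals incipient stress
    if account["days_past_due"] > 30:
        flags.append("DPD_30_PLUS")
    # Repeated cheque returns suggest cash-flow problems
    if account["cheque_bounces_90d"] >= 3:
        flags.append("FREQUENT_CHEQUE_RETURNS")
    return flags

accounts = [
    {"id": "B001", "utilisation_pct": 104, "days_past_due": 12, "cheque_bounces_90d": 0},
    {"id": "B002", "utilisation_pct": 72, "days_past_due": 45, "cheque_bounces_90d": 4},
]
report = {a["id"]: early_warning_flags(a) for a in accounts}
print(report)  # {'B001': ['LIMIT_OVERDRAWN'], 'B002': ['DPD_30_PLUS', 'FREQUENT_CHEQUE_RETURNS']}
```

A production system would evaluate many more signals across internal and external data, but the shape is the same: each account snapshot is scored against a library of rules and the resulting flags feed the lender's remediation workflow.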
Synthesizes data from multiple sources internal to the lender as well as numerous external sources in varying formats (structured, unstructured and semi-structured)
In addition to robust appraisal processes, an 'Early Warning Mechanism' is the most effective way of preventing loan defaults
Designed to provide an end-to-end 360-degree view of borrowers by aggregating relevant information on the entire value chain of the borrower and affiliated parties on a common platform
Integrates not only the internal data from banks but also various external data points, and provides insights based on the collated and synthesized data
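The data-synthesis offering described above can be sketched in Python, the platform's integration language. This is a minimal, illustrative version: the field names, source adapters and common schema are hypothetical, not the client's actual design.

```python
# Minimal sketch (hypothetical field names): normalising records from
# heterogeneous sources into one common borrower-event schema, so that
# structured and semi-structured inputs can be analysed together.
import json

COMMON_FIELDS = ("borrower_id", "source", "event_date", "description")

def from_bank_core(row):
    # Structured input: a row from the lender's core banking system.
    return {"borrower_id": row["cust_id"], "source": "bank_core",
            "event_date": row["txn_date"], "description": row["remarks"]}

def from_news_feed(payload):
    # Semi-structured input: a JSON item from an external news crawler.
    item = json.loads(payload)
    return {"borrower_id": item.get("entity"), "source": "news",
            "event_date": item.get("published"),
            "description": item.get("headline", "")}

def normalise(records):
    """Keep only events that carry every common field."""
    return [r for r in records if all(r.get(f) for f in COMMON_FIELDS)]
```

Each new external source then only needs its own small adapter function; downstream analytics see a single uniform event stream.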
Key Challenges
Preparing structured data for analytical purposes from numerous unstructured and semi-structured external data sources to draw insights
Integrating different technologies (Python, Java, Neo4j and SQL Server)
Writing fraud detection algorithms over a wide variety of data
Identifying the most appropriate charts to depict the generated information and external data, making it more understandable and less complex
Avoiding unnecessary and less relevant noise while highlighting the critical warning signals
Building a scalable analytics platform to identify risk indicators in near real time and empower decision makers to capitalize on risk events
Baking extensive domain knowledge and experience in forensic and fraud detection into a platform that enables predictive modelling with AI/Machine Learning/Deep Learning
Relationship mining and link analysis by constructing a proprietary relationship model over interconnected data
Enabling timely and cost-effective identification of potential 'Willful Defaulters'
Solution Approach & Methodology
Analyzing and understanding the product requirements and the financial risk management domain using readily available third-party information sources
Designing the platform architecture based on discussions with the client's technical team, and integrating the data sources into it
Executing pre-defined fraud investigation rules by creating a 'Rule-based Engine' in Python over unstructured and structured data
Identifying abnormalities and early warning signals through analysis and pattern matching
Representing the findings in one integrated application to visualize any deviation and take corrective measures
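The 'Rule-based Engine' step above can be sketched as a table of predicates applied to each borrower account. The rule names, thresholds and account fields below are invented for illustration; the platform's actual forensic rules are proprietary and far richer.

```python
# Minimal sketch of a rule-based engine, assuming one dict per borrower
# account with hypothetical field names. Each rule is (id, predicate, message).
RULES = [
    ("cheque_bounce", lambda a: a.get("bounced_cheques", 0) >= 3,
     "Three or more bounced cheques"),
    ("overdue", lambda a: a.get("days_past_due", 0) > 60,
     "Payments overdue beyond 60 days"),
    ("auditor_exit", lambda a: a.get("auditor_resigned", False),
     "Statutory auditor resigned"),
]

def run_rules(account):
    """Apply every pre-defined rule; return the red flags that fired."""
    return [(rule_id, message) for rule_id, check, message in RULES
            if check(account)]
```

Keeping the rules as data rather than code paths makes it straightforward to add new forensic rules and to re-run the full rule set incrementally as fresh data is captured.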
Key Highlights of the Approach
Reports are saved in hard copy for loan appraisal to help end users take decisions
Flexibility to capture more data for specific entities, on request
Interactive dashboards make the data easily interpretable without any expertise in the financial forensic investigation domain
A graph database is used to analyze the risk relationships of a single entity identified as a willful defaulter, or risk across the whole ecosystem
The entity score displays the complete picture of the borrower's business dynamics and is updated on a weekly basis
Risk identification is based on mapping the internal bank data with the available secondary data
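The graph-based relationship analysis sits on Neo4j in the production platform, but the core idea, walking the relationship graph outward from a flagged entity to surface affiliated parties, can be shown with a plain adjacency list. Entity names and the hop limit are illustrative only.

```python
# Minimal sketch of relationship mining: breadth-first search over entity
# relationships to find parties within a few hops of a flagged defaulter.
from collections import deque

def related_parties(edges, start, max_hops=2):
    """Return entities reachable from `start` within `max_hops` relationships."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)  # relationships treated as undirected
    seen, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    seen.pop(start)
    return seen  # entity -> hop distance from the flagged defaulter
```

In Neo4j the same traversal would be a variable-length Cypher pattern; the BFS above is just the language-agnostic shape of the query.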
Achievements
The team not only developed the prototype but also released the first version of the MVP, which has been successfully deployed at a blue-chip bank in India
Successfully architected and integrated the multi-technology solution into a unified platform
Developed algorithms for 'Relevancy' scoring of external media content
Successfully implemented text-mining algorithms and developed a proprietary relationship model
Developed algorithms implementing a scoring model to classify accounts by risk, in terms of the ability to pay on time
Implemented analytics over integrated external data sources of different natures (APIs, crawled public data) and the lenders' internal data in a very short duration
Conducted a platform performance test covering over 100 companies on the platform
Delivered on-time releases of the prototype as well as the first release of the MVP
Implemented a business rules framework for the various business rules that played a major role in the detection of early warning signals
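The scoring model mentioned above can be sketched as a weighted combination of fired warning signals mapped to a risk band. The weights and cut-offs below are invented for illustration; the client's actual model and classification thresholds are not described in the source.

```python
# Minimal sketch of an entity scoring model: each fired warning signal
# contributes a weight, and the total maps to a risk band. All numbers
# here are hypothetical.
WEIGHTS = {"cheque_bounce": 30, "overdue": 40, "auditor_exit": 20,
           "negative_news": 10}

def entity_score(signals):
    """signals: iterable of fired rule ids; higher score = higher risk."""
    return sum(WEIGHTS.get(s, 0) for s in set(signals))

def risk_band(score):
    if score >= 60:
        return "High"
    if score >= 30:
        return "Medium"
    return "Low"
```

Recomputing the score as new signals arrive gives the weekly-updated entity score described in the approach highlights.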
A rule-based engine executes pre-defined forensic rules automatically on data captured incrementally
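The 'Relevancy' scoring of external media content listed under Achievements is not described in detail in the source; a simple keyword-overlap version gives the flavour of the idea. The term lists and scoring formula here are purely illustrative.

```python
# Minimal sketch of relevancy scoring for external media content: score an
# article by its overlap with borrower-specific terms. The real algorithm
# is proprietary; this keyword-overlap version is an assumption.
import re

def relevancy(article_text, entity_terms):
    """Fraction of entity terms that appear in the article (0.0 to 1.0)."""
    words = set(re.findall(r"[a-z0-9]+", article_text.lower()))
    terms = [t.lower() for t in entity_terms]
    if not terms:
        return 0.0
    return sum(1 for t in terms if t in words) / len(terms)
```

A score like this lets the pipeline discard low-relevance news items before they reach the red-flag rules, which addresses the noise-versus-signal challenge noted earlier.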
Technology Deployed
Authentication and Authorization Frameworks: Spring
Presentation Tier Frameworks: JSP, jQuery, High Charts, D3JS, HTML5, CSS, JS
Business Tier Frameworks: Spring
Persistence Frameworks: Hibernate and Spring Data Neo4j
Application Servers: Tomcat
RDBMS Systems: Microsoft SQL Server
NoSQL/Graph DB Systems: Neo4j
Project Highlights
Client: Integrity Risk Analytics Start-up
Location: India
Industry: Financial Sector
Duration: 1+ year
Team Size: 5 people
Delivery Model: Hybrid
Engagement Model: Retainership
Operating Systems: Windows
Integration Services: Python, Talend Data Integrator, Spring Data, Spring MVC, Spring Security, Hibernate, Spring Schedulers, Velocity framework
Automation Tool: Jenkins (CI and CD)
Build Tool: Maven
About PSI
Pratham Software (PSI) is a global IT services company (with established ISO 9001:2015 & ISO/IEC 27001:2013
practices) providing software product development, consulting and outsourcing solutions to enterprises worldwide.
While providing a wide range of solutions, we focus on Outsourced Product Development (OPD), Business Process Management (BPM), Application Development and Maintenance (ADM) and Content Engineering. Our extensive
experience in OPD helps us build strong relationships with Independent Software Vendors (ISVs), as we work with
them throughout the product development lifecycle. In terms of technology and platform, we work across all major
technologies such as Microsoft, Java and Open source and have capabilities and experience in developing solutions
for web, mobile, Cloud and social media. For Enterprise customers, in addition to Process Automation, we also offer
development and support services in BI and DWH.
US Office: 21860, Via Regina, Saratoga, California 95070 USA | Ph:(408) 898-4846 | Fax: (408) 867-0666
India Development Center: G1-265-266, RIICO Industrial Area, EPIP, Sitapura, Jaipur 302022, India | Ph: (91)141-6690000
www.thePSI.com
All PSI products and services mentioned herein, as well as their respective logos, are trademarks or registered trademarks of PSI. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. The content is subject to change without notice. This content is provided by PSI for informational purposes only, without representation or warranty of any kind, and PSI shall not be liable for errors or omissions with respect to the content.