BHARATH H R
Mob: (+91) 9900199305 • Email: bharath83.hr@gmail.com
LinkedIn: https://in.linkedin.com/in/bharath-h-r-524ab1b9
SUMMARY
• 9.5 years of IT experience with a focus on QA and Software Testing. Strong knowledge of all phases of the SDLC, with in-depth, hands-on experience across the stages of the Software Testing phase.
• Proficient in all phases of the testing life cycle, from Test Planning to Reporting of Test Results. Experienced in defining Testing Methodologies and designing Test Plans and Test Cases.
• Expertise in Functional, Manual, Automation, and Data Warehouse (ETL) Testing.
• Good working knowledge of QTP with a keyword-driven framework.
• Good experience in managing test environments, in integration testing activities, and in coordinating with teams for build deployments.
• Currently holding a valid H1B (Form I-797B, Notice of Action). Looking for an employer willing to transfer this H1B from the current employer.
• Good domain knowledge of Investment Banking, with Agile (Scrum) methodologies.
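The keyword-driven framework approach noted above can be illustrated with a minimal sketch. This is written in Python rather than QTP's VBScript, and the keyword names, actions, and test steps are hypothetical examples, not taken from the actual framework:

```python
# Minimal keyword-driven test sketch: each test step is a (keyword, args)
# pair, and a dispatch table maps keywords to executable actions.
def launch_app(name):
    print(f"launching {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def verify_title(expected):
    print(f"checking window title == '{expected}'")

# Dispatch table: keyword name -> action function.
KEYWORDS = {
    "LaunchApp": launch_app,
    "EnterText": enter_text,
    "VerifyTitle": verify_title,
}

def run_test(steps):
    """Execute a list of (keyword, args) steps in order."""
    for keyword, args in steps:
        KEYWORDS[keyword](*args)

run_test([
    ("LaunchApp", ("OrderEntry",)),
    ("EnterText", ("symbol", "UBSG")),
    ("VerifyTitle", ("Order Blotter",)),
])
```

In a QTP implementation, the step list would typically be driven from an external data sheet, with the actions implemented as reusable function-library routines.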
TECHNICAL SKILLS
Functional Testing Tools: QTP 11, SoapUI, FIX Protocol, Selenium
Defect Tracking Tools: HP Quality Center, JIRA
Operating System: Windows XP/2000, UNIX
Programming Languages: C, C++, VB Script, Shell, Perl
RDBMS/ Database: Oracle 9i/10g, Sybase, TOAD, PL/SQL, DBartisan
FUNCTIONAL AREAS
Investment Banking – Order management System (OMS), Life cycle of Trade and Settlements.
Finance – Mortgage Loans and Secondary Loans
Telecom billing – CDR rating, Postpaid to Prepaid migration
PROJECT DETAILS
COMPANY: DELL SERVICES
PROJECT: UBS – SWISS CASH QA (APRIL 2013 – TILL DATE)
SR. QA ANALYST
CLIENT: UBS
DESCRIPTION:
TOPAZ is a front-office trading application within UBS that handles the bank's daily trading needs for various instruments such as Securities, Bonds, Equities, Funds and Derivatives. Orders flow in from different external clients over the FIX and SWIFT protocols and are placed to market, with built-in intelligence to block multiple orders and execute them in an efficient and cost-effective manner. Traders and brokers can track the progress of each order throughout the system using TOPAZ.
CURRENT ROLES:
• Automation and Manual Testing.
• Understanding the Functional Specifications.
• Creation & Review of User Stories.
• Deriving Test Cases from User Stories.
• Creation & Review of Front-to-Back Scenarios using FIX protocol.
• Preparation of Configuration & Test Data.
• Test Scenario Execution.
• Automation of Regression Test Cases for Topaz.
• Automation execution and script debugging using QTP 11.
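As a rough illustration of the tag=value structure of the FIX messages exercised in these front-to-back scenarios, a minimal parser sketch follows. It is in Python, and the field values are invented for illustration; actual Topaz message content is not shown:

```python
# FIX messages are flat "tag=value" pairs separated by the SOH (\x01)
# delimiter. This sketch parses a NewOrderSingle (MsgType 35=D) into a dict.
SOH = "\x01"

def parse_fix(raw):
    """Split a raw FIX string into a {tag: value} dict."""
    return dict(
        field.split("=", 1)
        for field in raw.strip(SOH).split(SOH)
        if field
    )

# Hypothetical order: 35=D (NewOrderSingle), 55=Symbol, 54=Side (1=Buy),
# 38=OrderQty.
raw = SOH.join(["8=FIX.4.2", "35=D", "55=UBSG", "54=1", "38=100"])
order = parse_fix(raw)
assert order["35"] == "D" and order["55"] == "UBSG"
```

A test scenario would assert on tags like these after driving an order front-to-back through the system.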
COMPANY: DELL SERVICES
PROJECT: GLOBAL GENERAL LEDGER TESTING TEAM (AUG 2009 – APRIL 2013)
QA ANALYST
CLIENT: UBS
DESCRIPTION AND ROLES:
• Responsible for conducting pre-release activities (environment readiness checks) across all regions, in accordance with planned UAT cycles and release dates covering the whole fiscal year.
• As part of these UAT cycles, actively involved in tracing errors and in monitoring, rerunning and terminating batch runs and data loads. Interacted with infrastructure teams such as UNIX and DB Services to help resolve connectivity issues within target times, abiding by the SLAs for completion of UAT batches and test requests.
• Supported global regions from a single location (Singapore) in a 24x5 work environment, handling testing requests mainly from BUCs (Business User Controllers), test managers and development teams across all regions.
• Strong understanding of Incident Management, Escalation Management and Service Level Management. Worked extensively with ITIL tools: Remedy, JIRA, MERCURY IT Governance, and HP Quality Center.
COMPANY: MAHINDRA SATYAM
PROJECT: KNOWLEDGE WAREHOUSE BUSINESS TRANSFORMATION (AUG 2008 – AUG 2009)
TEST ANALYST
CLIENT: APPLIED MATERIALS
DESCRIPTION AND ROLES:
• KWBT is a reporting database specifically designed to support ad-hoc user reporting from a front-end query tool such as Brio. It acts as a decision support system for top management to improve the business over the years.
• Responsible for creating new reports and enhancing existing ones using Brio Designer, as requested by the client. Scheduled and monitored jobs on the Brio Broadcasting Server, and monitored jobs running on the Production Server.
COMPANY: TECHNO TREE
PROJECT: INTEGRATION AND DEPLOYMENT (JUNE 2006 – AUG 2008)
GRADUATE ENGINEER TRAINEE
CLIENT: MTN
DESCRIPTION AND ROLES:
• Worked on-site on the deployment, integration and testing of a Telecom Billing product, @billity. Conducted training sessions for MTN users on various modules (Customer Care, Registration and Activation, Trouble Ticketing, etc.). Gained knowledge in uploading CDRs (Call Detail Records) and rating the uploaded CDRs. Experienced in addressing user queries and formulating solutions.
• Offshore, involved in analysing project requirements and change requests, designing test cases and performing unit testing, reviewing design documents and code, and setting up application environments, including application server and database configurations.
• Worked on a new development project to build a portal fulfilling the client's need for a single system integrating the key services provided by our billing system. Major service operations included 1) user/user-group management, 2) password management (password generation, encryption and decryption), 3) admin privileges and user rights management, and 4) a centralized logger that logs and manages events from the different applications.
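The CDR upload-and-rating step described above can be sketched as a simple tariff lookup. This Python sketch is illustrative only; the plan names and per-minute tariffs are hypothetical, not the actual @billity rating rules:

```python
# A Call Detail Record (CDR) captures one call; rating applies a tariff
# to its duration to produce a billable charge.
from dataclasses import dataclass

@dataclass
class CDR:
    caller: str
    callee: str
    duration_sec: int
    plan: str

# Hypothetical per-minute tariffs by plan.
TARIFF_PER_MIN = {"postpaid": 0.50, "prepaid": 0.80}

def rate(cdr):
    """Charge per started minute under the caller's plan."""
    minutes = -(-cdr.duration_sec // 60)     # ceiling division
    return round(minutes * TARIFF_PER_MIN[cdr.plan], 2)

print(rate(CDR("9900100100", "9900200200", 125, "postpaid")))  # prints 1.5
```

A postpaid-to-prepaid migration, as mentioned under Functional Areas, would then amount to re-rating the subscriber's CDRs under the new plan's tariff.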
TECHNICAL SKILLS
• OPERATING SYSTEMS: Windows, Unix, LINUX
• PROGRAMMING LANGUAGES: PL/SQL, UNIX Shell, Java
• DBMS/RDBMS: SQL, PL/SQL, Oracle 10g.
• TOOLS and UTILITIES: QTP, Selenium, TOAD, DB-Artisan, REMEDY, QC,
JIRA.
• CERTIFICATIONS: DELL certified Agile Scrum Practitioner.
• DOMAIN KNOWLEDGE: Investment Banking, Telecom
EDUCATION DETAILS
• Bachelor’s in Computer Science and Engineering (CSE) - Visvesvaraya Technological University, Belgaum, Karnataka - 67.16% - Graduated in September 2005
PERSONAL DETAILS
• Date of Birth: 13th October 1983
• Hobbies: Cricket, Crosswords, Philately, Music and exploring Linux
• Permanent Address: #7 Kalpavruksha, 1st Block, 1st J Main, 2nd Stage, Nagarbhavi, Bangalore – 560072
• Contact Numbers: Mob: +91-9900199305, Home: +91-80-23486613
• LinkedIn : https://in.linkedin.com/in/bharath-h-r-524ab1b9