The document summarizes several of the user's previous projects at Wipro Technologies working with clients such as Microsoft, Apple, DHL, and Motorola. It provides details on the project names, locations, clients, technologies used, and the user's responsibilities for each project. Projects involved tasks such as analyzing DryadLINQ on Windows HPC Server, developing an API for Apple's cloud platform, implementing security protocols for Motorola, and validating technical documents for Microsoft protocols.
At SQA Solution, the objectives of SAP System Testing are to verify that the installed system, which includes the SAP software, custom code, and manual procedures, performs per business requirements:
Executes as specified and without error.
Validates with users and management that the delivered system performs in accordance with the stated system requirements.
Ensures that the system works with other existing systems, including but not limited to interfaces, conversions, and reports.
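One common system-test activity implied by these objectives is reconciling a data conversion: every legacy record must arrive in the target system intact. A minimal sketch, with an entirely hypothetical record layout (not an SAP API):

```python
# Hypothetical data-conversion reconciliation check (illustrative record
# layout, not an SAP interface): verify every legacy record was loaded.

def reconcile(legacy_records, loaded_records, key="id"):
    """Return records missing from the load and records whose fields differ."""
    loaded_by_key = {r[key]: r for r in loaded_records}
    missing, mismatched = [], []
    for rec in legacy_records:
        target = loaded_by_key.get(rec[key])
        if target is None:
            missing.append(rec)
        elif target != rec:
            mismatched.append((rec, target))
    return missing, mismatched

legacy = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
loaded = [{"id": 1, "amount": 100}]
missing, mismatched = reconcile(legacy, loaded)
assert [r["id"] for r in missing] == [2]  # record 2 never made it across
```

In practice the same check runs against interface files and reports as well, with field-level comparison rules agreed with the business.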
With every passing day, organizations are becoming more and more mindful of the performance of their software products. However, many are still searching for the basics of Performance Engineering.
According to a recent study by Gartner, fixing performance defects near the end of the development cycle costs 50 to 100 times more than fixing them during the early phases of development. A product that suffers from serious performance issues may even have to be scrapped entirely.
Performance Engineering ensures that your application performs as expected, and that the software is tested and tuned to meet specified, or even unstated, performance requirements.
We present a webcast on Performance Engineering basics that walks you through the elements and process of performance engineering and offers a methodical approach to it.
It also offers details on a load testing tool, and describes how best to utilize it.
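At its simplest, a load test drives a target operation concurrently and reports latency percentiles. A minimal sketch (the target function is a stand-in, not the tool discussed in the webcast):

```python
# Minimal load-test sketch (illustrative; not the webcast's tool): run a
# target operation under concurrency and report latency percentiles.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def target_operation():
    """Stand-in for a real request; replace with an HTTP call in practice."""
    time.sleep(0.01)  # simulate ~10 ms of service time

def run_load(n_requests=50, concurrency=10):
    latencies = []
    def timed_call(_):
        start = time.perf_counter()
        target_operation()
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed_call, range(n_requests)))
    latencies.sort()
    return {
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }

stats = run_load()
assert stats["median_s"] >= 0.01  # every call sleeps at least 10 ms
```

Real load-testing tools add ramp-up profiles, think time, and pass/fail thresholds on top of this basic measure-under-concurrency loop.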
Visit http://www.impetus.com/featured_webcast?eventid=10 to listen to the entire webcast (20 minutes).
OR
To post any queries on Performance Engineering, write to us at isales@impetus.com
For case studies and articles on performance engineering please visit: http://www.impetus.com/plabs/casestudies?case_study=&pLabsClustering.pdf=
An Introduction to Software Performance Engineering - Correlsense
Software performance engineering is becoming increasingly important to businesses as they look to improve the non-functional performance of applications and get more out of IT investments. By leveraging performance engineering techniques, IT professionals can be indispensable in building and optimizing scalable systems. This introductory course will teach you the essentials of software performance engineering, including:
• The performance challenges faced by Enterprise IT today
• What is software performance engineering (SPE)?
• Best practices for building scalable software systems
• The approaches to integrating SPE into IT project lifecycles
• Common frameworks for measuring application performance and service levels
• The impact of SPE on software developers, testers, capacity planners, and other IT professionals
• Case studies from the finance, retail, and insurance industries
Instructor: Walter Kuketz, SVP and CTO, Collaborative Consulting
This training is sponsored by Correlsense, Collaborative Consulting, and New Horizons.
EMC Documentum xCP 2.x Tips for Application Migration v1.1 - Haytham Ghandour
This document addresses some of the common problems faced during application migration. It also covers related topics such as Type Adoption, importing types from Composer projects, interoperability, and reverse interoperability.
Application Performance: 6 Steps to Enhance Performance of Critical Systems - CAST
See more ways to improve application performance: https://www.castsoftware.com/use-cases/Improve-adm-quality
This white paper presents a six-step Application Performance Modeling Process using software intelligence to identify potential performance issues earlier in the development lifecycle. Enriching dynamic testing with structural quality analysis gives ADM teams insight into the performance behavior of applications by highlighting critical application performance issues, especially when combined with runtime information.
By adding structural quality analysis, ADM teams learn about violations of architectural and programming best practices earlier in the development lifecycle than with a purely dynamic testing approach. Structural quality analysis as part of the performance modeling process provides fact-based insight into application complexity (e.g. multiple layers, the dynamics of their interactions, the complexity of SQL) and allows ADM managers to anticipate the evolution of the runtime context (e.g. growing data volumes, higher transaction counts). The combined approach results in better detection of latent application performance issues within software. By resolving these issues early in the development cycle, teams not only save money but also prevent serious business disruptions.
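To make "structural quality analysis" concrete, here is a toy check (not CAST's product) that uses Python's ast module to flag a classic latent performance defect: a database call executed inside a loop (the N+1 query pattern):

```python
# Illustrative structural-quality check (a toy, not CAST's analysis): flag
# .execute() database calls nested inside for/while loops, a classic
# latent performance defect (N+1 queries).
import ast

SOURCE = """
for order in orders:
    cursor.execute("SELECT * FROM items WHERE order_id = %s", (order.id,))
"""

def find_queries_in_loops(source):
    """Return line numbers of .execute() calls nested inside loops."""
    tree = ast.parse(source)
    findings = []
    for loop in ast.walk(tree):
        if isinstance(loop, (ast.For, ast.While)):
            for node in ast.walk(loop):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Attribute)
                        and node.func.attr == "execute"):
                    findings.append(node.lineno)
    return findings

assert find_queries_in_loops(SOURCE) == [3]
```

Commercial analyzers apply hundreds of such rules across languages and correlate them with runtime data; the principle is the same.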
See how to Assess Your Application: https://www.castsoftware.com/use-cases/application-assessment
Assessing application development like the rest of the business
It is well overdue: application development and maintenance should be measured the same way as the rest of the business, based not just on how much work someone does but on how well they do it. Checking that the code works as expected is only a single measurement. How easy the application will be to maintain over time, how readily it can change as the business changes, how quickly new team members can understand the code and become productive, and how easily the application can be tested are just some of the things we need to examine to understand the real quality of the work done by application development teams. When these quality measurements are combined with measures of productivity (quantity), we get a real understanding of how well teams are performing and what return is being realized on the investment. These measurements apply both to in-house development organizations and to work done by outsourcers.
The applications delivered by IT are a significant differentiator between competitors, so application development needs to be managed as a core business process. Held up against corporate standards, and no matter how or where the development work is done, it must be done well, and the resulting applications need to stand the test of time.
Six Steps to Enhance Performance of Critical Systems - CAST
To view more ways to improve application performance: https://bit.ly/2OZGxgf
Application Development and Maintenance (ADM) teams often discover performance issues during the testing phase, when an application is almost complete, resulting in delays and business loss. This white paper presents a six-step Application Performance Modeling Process that uses software intelligence to identify and eliminate performance flaws before they reach production.
By combining dynamic performance testing with automated structural quality analysis, ADM teams get early information that a purely dynamic approach might miss, such as inefficient loops or SQL queries, and improve the development lifecycle. The combined approach results in better detection of performance issues within the application software.
Identifying these potential performance issues at an earlier stage in the development lifecycle not only reduces cost but also protects the business from disruption. The paper explains the different approaches to structural quality analysis and illustrates the modeling process at work.
In this file, you can find useful information about performance appraisal in Wipro, such as performance appraisal methods, tips, forms, and phrases. If you need more assistance with performance appraisal in Wipro, please leave a comment at the end of the file.
Cloud Native Night, April 2018, Mainz: Workshop led by Jörg Schad (@joerg_schad, Technical Community Lead / Developer at Mesosphere)
Join our Meetup: https://www.meetup.com/de-DE/Cloud-Native-Night/
PLEASE NOTE:
During this workshop, Jörg showed many demos and the audience could participate on their laptops. Unfortunately, we can't provide these demos. Nevertheless, Jörg's slides give a deep dive into the topic.
DETAILS ABOUT THE WORKSHOP:
Kubernetes was one of the big topics of 2017 and will probably remain so in 2018. In this hands-on technical workshop you will learn how best to deploy, operate, and scale Kubernetes clusters from one to hundreds of nodes using DC/OS. You will learn how to integrate and run Kubernetes alongside traditional applications and fast data services of your choice (e.g. Apache Cassandra, Apache Kafka, Apache Spark, TensorFlow, and more) on any infrastructure.
This workshop best suits operators focused on keeping their apps and services up and running in production, and developers focused on quickly delivering internal and customer-facing apps into production.
In this workshop you will:
- Get an introduction to Kubernetes and DC/OS (including the differences between the two)
- Deploy Kubernetes on DC/OS in a secure, highly available, and fault-tolerant manner
- Solve the operational challenges of running one or more large Kubernetes clusters
- Deploy big data stateful and stateless services alongside a Kubernetes cluster with one click
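As a taste of the objects managed in such a cluster, here is a minimal Kubernetes Deployment manifest, built as a Python dict for illustration (the image and names are examples; in practice you would write it in YAML and apply it with kubectl):

```python
# A minimal Kubernetes Deployment manifest expressed as a Python dict
# (example names/image; normally authored in YAML and applied via kubectl).
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps three pods running at all times
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [{
                    "name": "demo-app",
                    "image": "nginx:1.25",  # example container image
                    "ports": [{"containerPort": 80}],
                }],
            },
        },
    },
}

manifest_json = json.dumps(deployment, indent=2)
assert deployment["spec"]["replicas"] == 3
```

Running this Deployment on a DC/OS-hosted Kubernetes cluster is no different from any other cluster; DC/OS manages the cluster's own lifecycle underneath.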
Modern serverless computing platforms are on everyone's lips. They provide a programming model in which users no longer need to think about administering servers, storage, networking, virtual machines, high availability, or scalability, and can instead concentrate on writing their own code. The code maps business requirements modularly into small function packages (functions). Functions are the heart of a serverless computing platform: they read from their (often standard) input, perform their computations, and produce output. Function results that need to be kept are stored in a permanent datastore, such as the Autonomous Database. The Autonomous Database has the three properties required for a modern application development approach: self-driving, self-repairing, and self-securing.
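The function model described above can be sketched in a few lines; the runtime and datastore here are hypothetical stand-ins for what a real platform provides:

```python
# Minimal sketch of the serverless function model (hypothetical runtime:
# real platforms pass events to a handler and persist results in a managed
# datastore such as a database).

DATASTORE = {}  # stand-in for a permanent datastore

def handler(event):
    """A 'function': read input, compute, produce output."""
    total = sum(event["amounts"])
    return {"order_id": event["order_id"], "total": total}

def invoke(event):
    """What the platform does on each trigger: run the function, persist."""
    result = handler(event)
    DATASTORE[result["order_id"]] = result
    return result

result = invoke({"order_id": "A-17", "amounts": [20, 5]})
assert DATASTORE["A-17"]["total"] == 25
```

The platform, not the developer, decides where and how many copies of `handler` run, which is exactly the administrative burden the paragraph above says disappears.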
IT professional with experience on multiple platforms in the areas of enterprise and application architecture, software engineering, data analysis, configuration management, security analysis, project management, business analysis, and technical writing. Programming languages include Microsoft .NET, C/C++/C#, Perl, and some Java. Database experience includes Microsoft SQL Server 2000-2008R2, Access, and Xbase languages such as FoxPro and Clipper. Web development experience includes SOAP-based web services, HTML, CSS, and JavaScript, using third-party JavaScript libraries such as Prototype and jQuery as well as Ajax. Operating systems used include various versions of Microsoft Windows, Linux, SCO, Solaris, and AIX.
Enterprise Guide to Building a Data Mesh - Sion Smith
Making Data Mesh simple, open source, and available to all: without vendor lock-in, without complex tooling, using an approach centered around 'specifications', existing tools, and a built-in 'domain' model.
IBM Think Session 8598: Domino and JavaScript Development MasterClass - Paul Withers
Session from IBM Think 2018. Note: the architecture used is an extreme case of what's possible (and it could go further), rather than a real-world expectation
For our next ArcReady, we will explore a topic on everyone's mind: cloud computing. Several industry companies have announced cloud computing services. In October 2008, at the Professional Developers Conference, Microsoft announced the next phase of its Software + Services vision: the Azure Services Platform. The Azure Services Platform provides a wide range of internet services that can be consumed from on-premises environments or from the internet.
Session 1: Cloud Services
In our first session we will explore the current state of cloud services. We will then look at how applications should be architected for the cloud and explore a reference application deployed on Windows Azure. We will also look at services that can be built for on-premises applications using .NET Services, and address some of the concerns enterprises have about cloud services, such as regulatory and compliance issues.
Session 2: The Azure Platform
In our second session we will take a slightly different look at cloud-based services by exploring Live Mesh and Live Services. Live Mesh is a data synchronization client with a rich API for building applications. Live Services are a collection of APIs that can be used to create rich applications for your customers. Live Services are based on internet-standard protocols and data formats.
1. Details of Previous Projects (at Wipro Technologies)

Project Name: Analysis of DryadLINQ on Windows HPC Server 2008 R2
Work Location: Wipro Technologies, Bangalore
Client: Microsoft
Operating System: Windows HPC Server 2008 R2 SP2
Tools: Visual Studio
Language: DryadLINQ (beta), .NET 3.5
Project Description: Windows HPC Server 2008 R2 SP2 supports DryadLINQ. Dryad is a general-purpose runtime for the execution of data-parallel applications, and DryadLINQ provides an API that allows the creation and execution of data-parallel compute tasks over the cluster. Dryad uses DSC (Distributed Storage Catalog) for distributed storage over the cluster by maintaining file sets.
Responsibilities: As a study and analysis exercise, implemented the Back Testing Accelerator Tool (Back Tacc) and Wikipedia statistics.
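DryadLINQ itself is a .NET/LINQ API, but the pattern it expresses, partition the data, compute on each partition in parallel, then merge, can be sketched as a rough (hypothetical) analogue in Python:

```python
# Rough analogue of the data-parallel pattern DryadLINQ expresses:
# partition, compute per partition concurrently, merge the results.
# (Illustrative only; DryadLINQ is a .NET API over a Dryad cluster.)
from concurrent.futures import ThreadPoolExecutor

def word_count(partition):
    """Per-partition computation: count words in a chunk of lines."""
    counts = {}
    for line in partition:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

def merge(results):
    """Combine the per-partition counts into one table."""
    total = {}
    for counts in results:
        for word, n in counts.items():
            total[word] = total.get(word, 0) + n
    return total

lines = ["to be or not to be", "to do is to be"]
partitions = [lines[:1], lines[1:]]  # one partition per "node"
with ThreadPoolExecutor() as pool:
    totals = merge(pool.map(word_count, partitions))
assert totals["to"] == 4 and totals["be"] == 3
```

In Dryad the partitions live in DSC file sets on cluster nodes and the runtime schedules the per-partition work; the program structure is the same.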
Project Name: Monsoon - API for Apple's "Account Manager Interface with Cloud.com Cloud Stack" (Apple client)
Work Location: Wipro Technologies, Bangalore
Client: Apple Inc.
Operating System: Linux
Tools: Eclipse
Language: Java
Project Description: Monsoon is Apple's IaaS (Infrastructure as a Service) cloud platform, providing its users with infrastructure on demand. Users can request virtual machines, volumes, templates, etc., managers can request domains and user accounts, and the cloud provisions the requested resource.
Responsibilities:
Cloud platform experience:
• Installed OpenStack Nova and UEC on Ubuntu 10.10 Server to study cloud features
• Installed and maintained the Cloud.com management server, troubleshooting server errors whenever they were observed in the Cloud Stack UI
• Studied the cloud.com code to find bugs and enhance features in the cloud.com UI (introduced the "VM Details" and "Reports" tabs)
• Developed scripts for various tasks (e.g. deploying/destroying "n" instances, customizing the user interface, signature-based API requests for Cloud Stack)
• Integrated "OpenNMS" for reports from the cloud.com Cloud Stack UI
• Studied cookies to integrate Apple's DS with Cloud Stack for SSO
Performed tasks:
• Studied the Cloud Stack API definitions and the cloud.com security model to generate request signatures
• Gathered requirements related to the input XML from Account Manager
• Implemented and tested the Account Manager service (involving XML request/response parsing), the service manager, and cloud services
• Made asynchronous calls perform synchronously
• Data synchronization between Account Manager and the cloud
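The request-signature generation mentioned above follows CloudStack's publicly documented scheme: sort the parameters, lowercase the query string, HMAC-SHA1 it with the secret key, and Base64-encode the digest. A sketch (the keys below are dummy values):

```python
# Sketch of CloudStack-style API request signing (per the publicly
# documented scheme; the api/secret keys here are dummy values).
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    """Return the Base64 HMAC-SHA1 signature for a CloudStack-style request."""
    # Canonical form: parameters sorted by name, URL-encoded, then lowercased.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode()

params = {"command": "listVirtualMachines", "apikey": "demo-api-key"}
signature = sign_request(params, "demo-secret-key")
assert len(base64.b64decode(signature)) == 20  # SHA-1 digests are 20 bytes
```

The signature is then appended to the request as its own parameter, letting the management server verify the caller without transmitting the secret key.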
Project Name: Study & Understand Microsoft HPC Technology
Work Location: Wipro Technologies, Bangalore
Client: Microsoft
Operating System: Windows
Tools: Visual Studio
Language: MPI, OpenMP, HPC PowerShell
Project Description: As part of this, worked on "Baker Hughes", an MSS project using HPC technology to submit jobs to the "head node" of an MS cluster. Later studied HPC technology in depth.
Responsibilities:
• Maintenance support for Baker Hughes; studied Windows HPC Server 2008 server behavior related to high-performance computing
• MPI (Message Passing Interface) & OpenMP for multiprocessing
Project Name: ICMT (Inter-Company Matching Tool, DHL client)
Work Location: Wipro Technologies, Bangalore
Client: DHL
Operating System: Windows
Tools: Visual Studio
Language: C#.NET, RUP
Project Description: Developed an Inter-Company Matching Tool (ICMT) to facilitate the resolution of unmatched inter-company transactions (accounting data (S21, UCF, ALP) against operations data) and reduce write-offs to P&L due to amounts not billed to a customer.
Responsibilities: Responsible for analyzing customer requirements, designing the ICMT technical specification, customer interaction, and resolving discrepancies between design and development.
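The core idea behind a matching tool like ICMT, pairing accounting and operations entries by reference and surfacing whatever fails to match, can be sketched with an entirely hypothetical record layout (not DHL's actual data model):

```python
# Toy sketch of inter-company transaction matching (hypothetical record
# layout): pair entries by reference, report unmatched items on both sides.

def match_transactions(accounting, operations):
    """Return matched refs and refs that failed to pair up or disagree."""
    ops_by_ref = {t["ref"]: t for t in operations}
    matched, unmatched = [], []
    for txn in accounting:
        other = ops_by_ref.pop(txn["ref"], None)
        if other is not None and other["amount"] == txn["amount"]:
            matched.append(txn["ref"])
        else:
            unmatched.append(txn["ref"])
    unmatched.extend(ops_by_ref)  # operations entries with no accounting side
    return matched, unmatched

accounting = [{"ref": "T1", "amount": 100}, {"ref": "T2", "amount": 75}]
operations = [{"ref": "T1", "amount": 100}, {"ref": "T3", "amount": 20}]
matched, unmatched = match_transactions(accounting, operations)
assert matched == ["T1"] and unmatched == ["T2", "T3"]
```

The unmatched list is what drives the resolution workflow: every item there is a potential write-off until someone investigates it.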
Project Name: Platform Security Implementation (Motorola client)
Work Location: Wipro Technologies, Bangalore
Client: Motorola
Operating System: Solaris 10
Tools: None
Language: Unix shell scripting
Project Description: Implemented IPsec (Internet Protocol Security), platform hardening, and SSH & SFTP protocol security over the Common Management Platform under Solaris 10 using Bash shell scripting. The package must be initialized as part of post-OS installation.
Responsibilities: Responsible for requirements analysis, design, scripting, UT & BT, and also for creating/working/closing SRs & CRs raised during testing in the CQCM environment.
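Platform-hardening work of this kind typically includes auditing configuration files against a lockdown baseline. An illustrative check (a toy in Python, not the original Bash scripts) against an sshd_config:

```python
# Illustrative hardening audit (a toy, not the Motorola scripts): verify
# that an sshd_config enforces a few common lockdown settings.

REQUIRED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "protocol": "2",
}

def audit_sshd_config(text):
    """Return required settings that are missing or set incorrectly."""
    actual = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            parts = line.split(None, 1)
            if len(parts) == 2:
                actual[parts[0].lower()] = parts[1].lower()
    # Map each violated key to the value the baseline expects.
    return {k: v for k, v in REQUIRED.items() if actual.get(k) != v}

sample = """
Protocol 2
PermitRootLogin no
PasswordAuthentication yes
"""
violations = audit_sshd_config(sample)
assert violations == {"passwordauthentication": "no"}
```

Run as part of post-OS initialization, such a check either fixes the offending directives or fails the install so the platform never comes up unhardened.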
Project Name: MS Protocol Technical Document Validation (Microsoft client)
Work Location: Wipro Technologies, Bangalore
Client: Microsoft
Operating System: Windows Server 2003, Windows Server 2008, Windows 7
Tools: Visual Studio 2008, Spec Explorer, model-based testing
Language: C#.NET
Project Description: Worked on technical document validation for the MS-DFSNM (Distributed File System Namespace Management) and MS-IMSA (IIS IMSAdminBaseW Remote Protocol) protocols, which are Microsoft proprietary protocols. The process includes four phases:
• Study phase: understanding the Technical Document (TD) and filing editorial and TD issues, if any
• Plan phase: planning the test suite
• Design phase: designing the test suite
• Final implementation and execution: implementing the test suite using Spec Explorer and the PTF framework to simulate server behavior, which can then be viewed as finite-state-machine diagrams
Selected for a 10-day Protocol Review process training held at Microsoft, Redmond, out of a team of 250.
Responsibilities: Responsible for requirements capture, PQAR (Protocol Quality Assurance Report) document updates, module design (using the Spec Explorer tool with finite-state diagrams), test suite implementation using ASSERTs, testing & debugging, and final captures (using the Netmon API).
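The model-based testing approach Spec Explorer embodies, deriving test sequences from a finite-state model and checking the implementation against it, can be sketched in miniature (in Python rather than C#, with a made-up session protocol):

```python
# Minimal model-based-testing sketch (illustrative, not Spec Explorer):
# a finite-state model defines the allowed action sequences, and the
# implementation under test is walked in lockstep with the model.

MODEL = {  # state -> action -> next state
    "closed": {"open": "open"},
    "open": {"read": "open", "close": "closed"},
}

class Session:
    """Toy implementation under test."""
    def __init__(self):
        self.state = "closed"
    def apply(self, action):
        # A real implementation would do protocol work; here we just
        # follow the same transitions so the trace check passes.
        self.state = MODEL[self.state][action]

def check_trace(actions):
    """Walk the model and the implementation together, comparing states."""
    model_state, session = "closed", Session()
    for action in actions:
        if action not in MODEL[model_state]:
            return False  # trace not allowed by the model
        model_state = MODEL[model_state][action]
        session.apply(action)
        if session.state != model_state:
            return False  # implementation diverged from the model
    return True

assert check_trace(["open", "read", "close"])
assert not check_trace(["read"])  # 'read' is invalid in the closed state
```

Spec Explorer automates the hard parts: it explores the model to generate covering traces and renders the state machine as the diagrams mentioned above.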