In version 6, the IBM CE solution added exciting new configuration management capabilities across the lifecycle, better enabling parallel development and strategic reuse. Simply enabling these capabilities won't help you realize their potential; you must consider changes to your process and usage model to achieve results. This presentation describes current considerations, limitations and strategies for adopting configuration management so you can plan a smooth adoption and successfully realize the benefits in your organization.
After you complete this module, you should be able to explain these concepts:
- How requirements fit in the development process
- Key principles of requirements definition and management
- How you can manage requirements by using IBM Rational requirements management tools
Think Future Technologies – corporate presentation (public) – Tft Us
Think Future Technologies is a leading provider of outsourcing software development, QA & Testing and related services. Based in India and serving clients worldwide, Think Future Technologies delivers a wide variety of comprehensive end-to-end services that combine power, functionality, and reliability with flexibility, agility, and usability.
Our broad portfolio of service offerings includes software development, user interface design, and architecture planning, as well as quality assurance, implementation, deployment, maintenance, and documentation support. Through the efficient execution of these services, we can create robust, cutting-edge custom technology applications that most effectively address the unique business needs of our customers.
R12.2 is no longer the new kid on the block. With its latest release, 12.2.4, it is much more stable, and user adoption is increasing day by day. Upgrading to R12.2 is on the road map of nearly all Oracle E-Business Suite customers, and many organizations have already started planning their upgrades. In this session we provide 10 quick tips to consider as you plan an R12.2.4 upgrade.
Basic concepts and terminology for the Requirements Management application – IBM Rational software
After you complete this module, you should be able to do these tasks:
- Explain the difference between Jazz™ Team Server and the Requirements Management (RM) application
- Describe the basic concepts and terminology in the RM application
- Identify tasks that the team must do before starting a requirements management project with IBM® Rational® DOORS Next Generation or IBM® Rational® Requirements Composer
Interconnect session 3498: Deployment Topologies for Jazz Reporting Service – Rosa Naranjo
This presentation will help you design deployment topologies for your reporting needs and decide which components are right for you. On the data collection side, it describes both data warehouses (and related ETL mechanisms such as DCC ETL and DM ETL) and IBM Lifecycle Query Engine (LQE), as well as the Jazz application setup that feeds into LQE. On the reporting side, the focus is on Jazz Reporting Service Report Builder, but it also covers options for when Report Builder is not enough, such as Cognos integration through the ALM Cognos Connector. The session also covers how to group applications and servers, hardware needs, and more.
Config Management Camp 2017 - If it moves, give it a pipeline – Mark Rendell
A talk dedicated to improving the quality of infrastructure code using free open source software. We used to say "if it moves, lock it down in version control" and then the concept of a Continuous Delivery pipeline came along and the advice progressed to "if it moves, lock it down in version control and build a Continuous Delivery pipeline to test and release every change continuously". This advice is still more commonly followed in application code than infrastructure and platform code. I will talk about how we have seasoned this dogfood by making CD of infrastructure code easier. The solution “ADOP” is free and open source and currently makes building a pipeline for a Chef Cookbook, Ansible Playbook or Docker image almost trivial. I will describe and demo the solution including how to adopt it, where I think it is going next, and how valuable we have found it.
http://cfgmgmtcamp.eu/schedule/testing/mark-rendell.html
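The "if it moves, build a pipeline for it" idea from this talk can be sketched in a few lines. The stage names and checks below are illustrative stand-ins (a real pipeline would invoke tools such as ansible-lint or molecule), not ADOP's actual implementation:

```python
# Minimal sketch of a Continuous Delivery gate for infrastructure code:
# every change runs through lint -> test stages, and the first failing
# stage blocks the release. The lambda checks are hypothetical stand-ins
# for real tools (e.g. ansible-lint, molecule test).

def run_pipeline(change, stages):
    """Run `change` through each (name, check) stage; stop at first failure."""
    for name, check in stages:
        if not check(change):
            return f"failed at {name}"
    return "released"

stages = [
    ("lint", lambda c: "\t" not in c),        # stand-in for a linter
    ("test", lambda c: c.endswith(".yml")),   # stand-in for a test harness
]

print(run_pipeline("playbooks/site.yml", stages))  # released
print(run_pipeline("playbooks/site.txt", stages))  # failed at test
```

The point of the sketch is the gating behavior: infrastructure changes get the same fail-fast, test-every-change treatment as application code.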
To put the future of the Internet of Things into perspective, we've compiled a list of the industry's latest trends and statistics.
To learn how CloudOne helps the world's best companies make their things for the Internet of Things, visit www.oncloudone.com.
This presentation was made by Salesforce.com, inc. (Release Readiness Team).
This is a short (only ~270 slides) summary of the features developed.
For more info please check:
https://releasenotes.docs.salesforce.com/en-us/spring17/release-notes/salesforce_release_notes.htm
Did you know that you can develop awesome products with zero product specifications? We recently quantified the gains for a product we built using a Lean Startup and MVP approach, and were pleasantly surprised to find a minimum 47% gain in time-to-market, 32% cost savings, 55% improvement in product quality, and 40% gain in business value compared to traditional product development methods.
ClearCase Version Importer - a migration tool to Rational Team Concert SCM – IBM Rational software
A new, simpler tool for importing ClearCase version history into Rational Team Concert (RTC) was introduced in RTC 4.0.5. This stand-alone tool does not require ClearCase synchronization to be set up. The presentation first provides an overview of the differences between ClearCase and RTC SCM, then covers the new migration tool and its enhanced capabilities in the upcoming RTC release. You will learn how the tool can help you migrate your data successfully, and it concludes with a live demo of the migration tool. Watch the presentation on YouTube: http://ow.ly/uvBSX
Cognos Analytics Release 6: March 2017 Enhancements – Senturus
Cognos Analytics Release 6 went live on March 17, 2017. This webinar recording walks through the portal, dashboard and reporting feature enhancements. View the webinar recording at: http://www.senturus.com/resources/cognos-analytics-march-2017-enhancements/.
Nic Leduc from the IBM product team describes and demonstrates portal, dashboard and reporting enhancements. PORTAL: allows more flexibility to convert Cognos BI 10 portal pages, create report views, shortcuts and more. DASHBOARD: enhanced mapping plus now supports direct access to OLAP packages. REPORTING: allows access to queries from data modules in addition to the many enhancements to the interactive viewer.
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
SaaS architectures can be deployed onto AWS in a number of ways, and each optimizes for different factors from security to cost optimization. Come learn more about common deployment models used on AWS for SaaS architectures and how each of those models are tuned for customer specific needs. We will also review options and tradeoffs for common SaaS architectures, including cost optimization, resource optimization, performance optimization, and security and data isolation.
Migrations of existing enterprise applications to the cloud can be complex. There are no migration methodologies or magic bullets that enable a simple lift and shift or automated migration. Typical migration projects take a great deal of discovery work, re-architecture, and refactoring. In this session, we will share known challenges and considerations that must be accounted for when designing, planning, and executing a migration. Topics will include scale-out and distributed architectures, geographic dispersion, leveraging existing cloud services, and logging and monitoring. In addition, this session will address how in-depth discovery efforts can be paired with configuration management, automation, and source control to minimize the risk of future technical debt. Finally, we'll cover the business and technical factors that affect the complexity of application refactoring.
Speaker: Kim Woodbury, IBM
Overview: This session will provide insights into the strategic direction for Maximo and the product roadmap. There will be a focus on the next year's deliverables across the portfolio, including areas such as mobility and business intelligence. Also learn how to better interact with support, and about the tools and resources available as you troubleshoot and maintain Maximo.
IBM Cloud University 2017-IDPA009-IBM BPM Upgrade and Migration Made Easy – Brian Petrini
Upgrading to the latest version of IBM Business Process Manager (BPM) has never been easier. Ever since the release of IBM BPM 8500 in 2013, customers have been able to move to the latest release with an in-place upgrade, without the need for data migration. This session will discuss the top practices in planning a painless upgrade to the latest BPM continuous release version, whether you are running BPM 85x or an older version. We will also discuss the options available if you want to move your BPM program to the cloud, as well as ways to design your applications to ensure an easy upgrade every time.
InterConnect 2017 HBP-2884-IBM BPM upgrade and migration made easy – Brian Petrini
Upgrading to the latest version of IBM BPM has never been easier. Ever since the release of IBM BPM 8500 in 2013, customers have been able to move to the latest release with an in-place upgrade without the need for data migration. This session will discuss the top practices in planning a painless upgrade to the latest BPM continuous release version, whether you are running BPM 85x or an older version. We will also discuss the options available if you want to move your BPM program to the cloud, as well as ways to design your applications to ensure an easy upgrade every time.
Webinar December 2018 - Planning Analytics Workspace (PAW) Tips & Tricks. Today’s webinar is part of an advanced webinar series offered by QueBIT. Our next webinar is scheduled for Thursday, January 10th at 2pm Eastern. Learn about the advancements in the Cognos Analytics 11.1 release. These changes will bring the power of artificial intelligence, machine learning, and advanced analytics to all Cognos Analytics users to empower, enlighten, and facilitate a new breed of boundless data explorers! Register today by accessing the Events page on our website at quebit.com/news-events.
OOW16 - Planning Your Upgrade to Oracle E-Business Suite 12.2 [CON1423] – vasuballa
This session discusses key upgrade planning considerations, combining lessons learned from customers with practical advice from Oracle Support, Oracle Consulting, and Oracle’s development organization. Understand how to build the business case, identify needed time and resources, prepare business and IT staff for changes, plan for required system changes, create an effective test strategy, and more.
Open Mic to discuss the new features related to Portal and Web Content Management introduced in version 8.5. We will be covering changes related to themes, mobile, and social integration, as well as WCM changes related to syndication and rich media aspects of the new release.
Top Features to Include in Your Winzo Clone App for Business Growth – rickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI App – Google
https://sumonreview.com/ai-fusion-buddy-review
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to build high-converting sales video scripts, ad copies, trending articles, blogs, etc. 100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
See My Other Reviews Article:
(1) AI Genie Review: https://sumonreview.com/ai-genie-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Navigating the Metaverse: A Journey into Virtual Evolution – Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
Transform Your Communication with Cloud-Based IVR Solutions – TheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
An Enterprise Resource Planning (ERP) system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which enhances productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Introducing Crescat - Event Management Software for Venues, Festivals and Eve... – Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
May Marketo Masterclass, London MUG May 22 2024 – Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Launch Your Streaming Platforms in Minutes – Roshan Dwivedi
The claim of launching a streaming platform in minutes might be a bit of an exaggeration, but there are services that can significantly streamline the process. Here's a breakdown:
Pros of Speedy Streaming Platform Launch Services:
No coding required: These services often use drag-and-drop interfaces or pre-built templates, eliminating the need for programming knowledge.
Faster setup: Compared to building from scratch, these platforms can get you up and running much quicker.
All-in-one solutions: Many services offer features like content management systems (CMS), video players, and monetization tools, reducing the need for multiple integrations.
Things to Consider:
Limited customization: These platforms may offer less flexibility in design and functionality compared to custom-built solutions.
Scalability: As your audience grows, you might need to upgrade to a more robust platform or encounter limitations with the "quick launch" option.
Features: Carefully evaluate which features are included and if they meet your specific needs (e.g., live streaming, subscription options).
Examples of Services for Launching Streaming Platforms:
Muvi (muvi.com)
Uscreen (uscreen.tv)
Alternatives to Consider:
Existing Streaming platforms: Platforms like YouTube or Twitch might be suitable for basic streaming needs, though monetization options might be limited.
Custom Development: While more time-consuming, custom development offers the most control and flexibility for your platform.
Overall, launching a streaming platform in minutes might not be entirely realistic, but these services can significantly speed up the process compared to building from scratch. Carefully consider your needs and budget when choosing the best option for you.
Mobile App Development Company In Noida | Drona Infotech
Looking for a reliable mobile app development company in Noida? Look no further than Drona Infotech. We specialize in creating customized apps for your business needs.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
E-commerce Application Development Company – Hornet Dynamics
Your business can reach new heights with our assistance, as we design solutions tailored to your goals and vision. Our eCommerce application solutions can digitally coordinate all retail operations processes to meet the demands of the marketplace while maintaining business continuity.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Workshop - Innovating with Generative AI and Knowledge Graphs – Neo4j
Go beyond the AI hype and discover practical techniques for using AI responsibly across your organization's data. Explore how knowledge graphs can increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships with LLMs to bring domain-specific context and improve reasoning.
Bring your laptop, and we will guide you through setting up your own generative AI stack, with practical, coded examples to get you started in minutes.
AI Genie Review: World’s First Open AI WordPress Website CreatorGoogle
AI Genie Review: World’s First Open AI WordPress Website Creator
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-genie-review
AI Genie Review: Key Features
✅Creates Limitless Real-Time Unique Content, auto-publishing Posts, Pages & Images directly from Chat GPT & Open AI on WordPress in any Niche
✅First & Only Google Bard Approved Software That Publishes 100% Original, SEO Friendly Content using Open AI
✅Publish Automated Posts and Pages using AI Genie directly on Your website
✅50 DFY Websites Included Without Adding Any Images, Content Or Doing Anything Yourself
✅Integrated Chat GPT Bot gives Instant Answers on Your Website to Visitors
✅Just Enter the title, and your Content for Pages and Posts will be ready on your website
✅Automatically insert visually appealing images into posts based on keywords and titles.
✅Choose the temperature of the content and control its randomness.
✅Control the length of the content to be generated.
✅Never Worry About Paying Huge Money Monthly To Top Content Creation Platforms
✅100% Easy-to-Use, Newbie-Friendly Technology
✅30-Days Money-Back Guarantee
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
#AIGenieApp #AIGenieBonus #AIGenieBonuses #AIGenieDemo #AIGenieDownload #AIGenieLegit #AIGenieLiveDemo #AIGenieOTO #AIGeniePreview #AIGenieReview #AIGenieReviewandBonus #AIGenieScamorLegit #AIGenieSoftware #AIGenieUpgrades #AIGenieUpsells #HowDoesAlGenie #HowtoBuyAIGenie #HowtoMakeMoneywithAIGenie #MakeMoneyOnline #MakeMoneywithAIGenie
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Diagram: variants of the Car X Model over time, including the Sports model convertible, Sports model coupe, Sports Model, and GL Model.
Putting it all together:
Speeding up delivery of highly customized innovation
Work in a development stream that spans tools
Baseline across tools
Branch to create new variants or releases
Compare across configs
Control delivery of changes across configs
Reuse without copying, making updates and impact analysis much easier
Diagram legend: Function, Stream, Baseline; symbols denote a baseline, a branch, and artifact propagation. The diagram shows Requirements, Architecture, Test, and Implementation configurations for the Car X Model and its components: Power Train X (GearBox X and Engine X) and Body X.
There’s also a lot of information available in the Appendices:
Architecture changes v5.x to v6.x
Additional considerations and behavior changes
One of the most familiar images of the Jazz toolset shows lifecycle data linked iteratively across the engineering lifecycle of design, requirements, test, and source code.
This diagram shows what can be a relatively simple case if you link data, but even this level of complexity would prove a challenge for synchronizers if you were to copy updates between tools.
Let’s zoom in and consider a single test case that validates a requirement, which itself satisfies a higher-level requirement.
An engineer makes edits to the requirements and test cases.
Now we have ambiguity:
which version of the test case validates which version of the requirement?
The engineers then make further edits to the tests
Engineers don’t want to manually manipulate the links between artifacts – that’s too much overhead, and it’s too easy to make mistakes.
Ideally the system would do this for us. {After all, computers are good at keeping track of details like this – and people are not.}
Let’s look at how the IBM solution, based on the OSLC specification for configuration management, solves this.
Each tool is responsible for its own configurations (streams and baselines). This provides a way of saying “exactly these artifacts at exactly these versions”
But it doesn’t solve the question of data outside the scope of the tool’s responsibility – in particular, links to data in other tools.
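The idea of a tool-owned configuration, "exactly these artifacts at exactly these versions", can be sketched as a mapping from artifacts to versions. The class and method names below are illustrative only, not the actual Jazz API:

```python
# Illustrative model only -- not the actual Jazz/OSLC API.
# A stream is a mutable mapping of artifact -> version; a baseline
# freezes that mapping so the exact state can be recreated later.

class Baseline:
    def __init__(self, name, versions):
        self.name = name
        self._versions = dict(versions)   # immutable snapshot

    def version_of(self, artifact):
        return self._versions.get(artifact)

class Stream:
    def __init__(self, name, versions=None):
        self.name = name
        self.versions = dict(versions or {})   # artifact id -> version id

    def save(self, artifact, version):
        self.versions[artifact] = version

    def baseline(self, name):
        # Freeze the current state of the stream.
        return Baseline(name, self.versions)

rm = Stream("RM main", {"REQ-1": "v1", "REQ-2": "v1"})
bl = rm.baseline("RM 1.0")
rm.save("REQ-1", "v2")            # later edits do not affect the baseline
assert bl.version_of("REQ-1") == "v1"
```

The baseline copies the version map at creation time, which is what lets you say later "give me exactly the artifacts that made up RM 1.0" regardless of ongoing stream work.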
In the past teams have tried to work around this by keeping track outside the tool:
For example, “Baseline X in the requirement tool is related to Baseline Y in the SCM …”
We really want the system to keep track of the details. The information system of record should be the information that drives the tools.
Global configurations provide a context beyond single tools.
They can include baselines or streams from other tools, and indicate which baselines or streams belong together to make up a particular version or variant of the overall solution.
The global configuration also provides the context for resolving links between artifacts.
Links work differently in this configuration-aware environment.
Instead of pointing to a specific version of an artifact, they point to the artifact in general. We call this a “concept link”.
The requester of an artifact at the other end of a link provides the link (as they did in CLM V5), and now also provides a configuration.
With the additional configuration context, the tool can resolve the link to a specific version of the requested artifact.
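A minimal sketch of concept-link resolution, assuming a configuration is simply a mapping from concept URIs to version URIs; the URIs and the function here are hypothetical, for illustration only:

```python
# Hypothetical sketch of configuration-aware link resolution.
# A "concept link" points at an artifact in general; the configuration
# context supplies the mapping from concept to a specific version.

def resolve(concept_uri, configuration):
    """Return the specific version of the artifact in this configuration."""
    try:
        return configuration[concept_uri]
    except KeyError:
        raise LookupError(f"{concept_uri} is not part of this configuration")

# Two configurations (e.g. two global configuration contexts) resolve
# the same concept link to different versions:
release_1 = {"rm/REQ-42": "rm/REQ-42/v3", "qm/TC-7": "qm/TC-7/v1"}
release_2 = {"rm/REQ-42": "rm/REQ-42/v5", "qm/TC-7": "qm/TC-7/v4"}

assert resolve("rm/REQ-42", release_1) == "rm/REQ-42/v3"
assert resolve("rm/REQ-42", release_2) == "rm/REQ-42/v5"
```

This is why the earlier ambiguity ("which version of the test case validates which version of the requirement?") disappears: the link itself never changes, only the configuration used to resolve it.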
You can see, then, that if RM and QM projects are linked, both (or neither) must support configuration management to ensure that links resolve in the correct context.
Within the global configuration component, streams depict the development effort and group the related configurations across the different domains, whether they are editable streams or frozen baselines.
Here we see that the global configuration stream for this component consists of a requirements baseline but streams of QM, DM, and SCM, meaning that the team is still working in those areas.
You can also see an existing global baseline for this component that includes baselines from each of the local configurations.
Notice that not every baseline is included in a global configuration, and the streams and baselines from the different tools may not be identical or symmetrical. That is why the global configuration is so valuable: it defines the relationships so that the engineer or contributor just has to know the global configuration, and not all the details of which local application configurations it includes.
Contrast this slide with the Excel one from before.
Each component is self-contained with its own requirements, tests, and designs.
At a glance, the context between the components is much more obvious across each component project in the configuration.
Simply read the slide before showing the GUI shot of the same thing.
Following through, you see the components listed. Your components are structured in a logical way, showing you the contributing tools’ components.
Linking through initially from the Global Configuration for AMR
To the streams of the AMR representing the different variants of AMR
Then showing how each component is a collection of contributions from RM, QM and DM, put together into its own global configuration/logical superset
There are many benefits to adopting configuration management.
It is a given now for managing source code.
Many companies are facing increasing complexity and variation in their product lines based on geography, specific customer demands, and other factors. Many need to do parallel development, sometimes evolving requirements for their next release before work has completed on the current release.
Many have tried to manage this with manual processes, which are error-prone and time-consuming.
These new capabilities put the heavy lifting on the applications to manage which versions of artifacts belong together, and make up the correct product variant or version, removing the manual work and burden.
This reduces errors and cost, and improves speed of delivery.
Reuse also increases speed, reduces effort and cost, and can increase quality as well.
When you have an established baseline of artifacts, you can reliably recreate that product release to modify or evolve it, and be confident in the state of the artifacts and your starting point of quality.
We also encourage customers to discuss their needs, questions, and intentions for Configuration Management with their IBM client reps, lab advocates, ULL, and other knowledgeable consultants.
As mentioned before, there are significant architectural changes in v6 to support configuration management; previous versions of CLM applications don’t know how to work with configurations and versioned artifacts. And as we continue to expand capabilities across the platform (e.g. new link validity service), the CLM apps will need to be at a consistent level to take advantage of those capabilities.
When a configuration-enabled project links to a pre-v6 project: links between the projects are frozen and can’t be changed; you can’t create new links between them; and links to the enabled project always resolve to the default configuration. That may be acceptable during a transition, but isn’t viable for long-term operation. You need a plan to upgrade everything to v6.
In RTC rich clients prior to v6, links to versioned artifacts will resolve to the default configuration. If you must use pre-v6 clients, you’ll need to use the web client to create and navigate versioned links.
If RM or QM projects link to other RM or QM projects, you should either enable all or none of the linked projects. If you enable only one of the linked projects, the behavior is the same as mixing versions (as described above), and is not viable in the longer term.
When you enable RM and QM projects, they stop feeding data to the DW, the existing data is archived, and you won’t be able to access any BIRT or RRDI reports for them. Also, any dashboard widgets that use report resources will stop working, although you will still see them in the widget catalog.
The projects feed data to the “Lifecycle Query Engine using Configurations” data source, and that is the data source to use in JRS Report Builder for all configuration-based reporting.
RTC still feeds the DW as usual, and the RTC reports and widgets continue to work – unless they include any configuration data (like links to versioned artifacts), in which case that data isn’t in the DW, it’s in the “LQE using Configurations” data source.
Key reporting plan item: 357440 - [CLM] Version Aware Reporting across the ALM Solution (tracks multiple lower-level PIs)
LQE doesn’t yet support historical data, metrics, or trending reports for configuration-aware projects – initial metrics/trend reporting is expected in 6.0.3, and is not yet at parity with DW support
The configuration-aware RM, QM, and DM dashboard widgets reflect your current configuration context by default, but you can specify (and save) a different configuration in the widget’s settings.
Some data is not yet available for version-aware reporting (in some cases, it’s not in the DW either):
No metrics or trending
Missing: DNG views; some QM lab management resources; RTC build
Note also that neither the DW nor LQE includes data on SCM, plan resources, WI comments; DNG change sets, reviews, and module hierarchy
Combining RTC WIs and config-aware /versioned artifacts: while you use a configuration to filter the versioned artifacts (from DNG and RQM), the RTC WIs do not have any property that specifies the configuration. You need to filter the WIs based on the appropriate planned-for or found-in attribute (depending on the artifact type) that maps to the Release associated to the configuration.
Sample reports for LQE are available on jazz.net. Note that these samples require that the JKE Money that Matters sample exists when you import them. We suggest you import them into a test system with this sample, modify as needed for your data model, then export and import into your production environment.
The JRS Report Builder OOTB reports are currently all built on the DW data source. You have to write your own for the LQE data source.
OOTB reports (including GC): PLE and configuration aware OOTB Reports (tracked by 357440)
Here is the list of links which WILL be supported in either direction:
http://purl.org/dc/terms/references
http://open-services.net/ns/rm#affectedBy
http://open-services.net/ns/rm#implementedBy
http://open-services.net/ns/rm#trackedBy
http://open-services.net/ns/rm#validatedBy
http://open-services.net/ns/rm#elaborates
http://open-services.net/ns/rm#elaboratedBy
http://open-services.net/ns/rm#specifies
http://open-services.net/ns/rm#specifiedBy
The following DM links are added but are only unofficially supported:
http://www.ibm.com/xmlns/rdm/types/ArtifactTermReferenceLink
http://www.ibm.com/xmlns/rdm/types/Decomposition
http://www.ibm.com/xmlns/rdm/types/Embedding
http://www.ibm.com/xmlns/rdm/types/Extraction
http://www.ibm.com/xmlns/rdm/types/Link
http://www.ibm.com/xmlns/rdm/types/SynonymLink
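The link-type URIs above can be used directly as RDF predicates. A minimal sketch, using a plain Python set as a stand-in triple store (the artifact URIs are made up for illustration):

```python
# Sketch: the OSLC RM link-type URIs listed above, used as plain
# predicate constants over a toy triple store (a Python set).

OSLC_RM = "http://open-services.net/ns/rm#"
VALIDATED_BY = OSLC_RM + "validatedBy"
IMPLEMENTED_BY = OSLC_RM + "implementedBy"

links = set()

def add_link(source, predicate, target):
    links.add((source, predicate, target))

add_link("rm/REQ-42", VALIDATED_BY, "qm/TC-7")
add_link("rm/REQ-42", IMPLEMENTED_BY, "ccm/WI-99")

# Follow validated-by links from a requirement:
targets = [t for (s, p, t) in links if s == "rm/REQ-42" and p == VALIDATED_BY]
assert targets == ["qm/TC-7"]
```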
Change sets – agreement is needed on the representation of change sets at the OSLC level before they can be implemented in the tools
Version types in GCM – we believe the latest state of the type system is published to TRS, as QM does. This works for most scenarios (you can’t report on older instance data that may reflect
The OSLC Configuration Management spec defines how to handle versioned artifacts. Until they add support for this spec, applications don’t include versioning info or know how to process it. Existing OSLC or URL links to RM, QM, or DM artifacts from external applications, documents, or web pages will resolve to the default configuration.
RTC work items aren’t versioned, so integrations with WIs continue to work as expected.
So this effectively means that integrations between RM and QM and DOORS, CQ, TaskTop, HPQC, etc. will not work correctly until the other applications support the Configuration Management spec.
RQM test execution adapters that have been verified include the National Instruments Test Integration Adapter and MicroGenesis CANoe, plus several other IBM-provided adapters.
There have been discussions with IBM products as well as third-party application vendors. Progress is occurring, but there is nothing to announce yet. Expect some announcements in the new year.
Link validity doesn’t yet support multiple “profiles”. It is based on specific properties, and you can’t change which those are (yet).
Requirements reconciliation – this is really more of an awareness point. Because link validity operates on specific properties, the fact that an artifact was updated doesn’t mean its validity changed. Now that QM uses validity for reconciliation, the reconciliation won’t indicate whether something has been updated, but only whether its validity status changed.
Applying filters based on lifecycle links doesn’t work well. In RM, you can’t filter a view based on validated-by, affected-by, tracked-by, or implemented-by links – those options just aren’t there, nor is the “Limit by lifecycle status” option. In QM list views (test plans/cases/ERs), you can’t reliably filter artifacts by links to development items or plans.
RM: CM: Filters: Support filtering by lifecycle status on opt-in projects
QM: 141652 Additional cross artifacts views and filtering options
The QM mobile app is targeted for 6.0.3.
RDM and RTC already support configuration management and require no further steps to enable that capability.
NOTE: In 6.0.2, RDM does include its own key under the covers; that is to ensure that linking resolves correctly. It does not impact any other DM capabilities related to configuration management, and nothing needs to be done at the project-area level. For 6.0 and 6.0.1 installs, the CLM administrator should add their key into the DM server properties to address the link resolution issue.
DNG and RQM have some basic native capabilities related to baselines/snapshots and change management, but if you want to have multiple streams and baselines, with versioned artifacts, and combine them into global configurations, you need to explicitly enable Configuration Management in each of those tools.
Enabling configuration management in both DNG and RQM is a two-step process. We wanted an explicit, overt action to enable it, because once you do so for a project, there is no going back. The decision must be carefully considered, which we’ll cover later.
Customers must first activate the configuration management capabilities on the DNG and RQM servers. This is done through the use of an activation key. The key is not a license, and all customers are entitled to it; it is required in evaluation as well as production environments.
There are two ways to obtain the key. If just evaluating the capability, we ask customers to go to a new self-serve page where they read through the primary considerations and tradeoffs, acknowledge they’ve done so, then click a button to generate the key. If a customer wants to use configuration management in production, we ask them to contact IBM Support, who will have a more detailed, guided discussion with them to ensure they have thought through all the considerations before generating the key on their behalf and providing it to them.
Once configuration management is enabled for an application, each RQM or DNG project area that is to use configuration management must itself be enabled, via a property of the project area. After you turn on configuration management for a project, it can create multiple streams and baselines and participate in Global Configurations.
It is important to reemphasize that once a project is enabled for configuration management, it cannot be disabled or turned off.
Dashboard widgets that no longer work still remain in the widget catalog
Regarding historical reporting – if customers were extracting data from the CLM DW to a third-party data repository (e.g. Insight), they could still report on the historical data from there. This suggests they should ensure any such ETL is done before they enable Configuration Management.
See Nick Crossley’s presentation on the May 13 beta call for a detailed discussion on the updated linking model
https://www-304.ibm.com/connections/wikis/home?lang=en-us#!/wiki/Wbe4c0c5c6f6a_4677_9601_3176a79d2c32/page/CLM%20Beta%20Meetings
Every customer adopting 6.0 will need to upgrade/migrate, regardless of whether they use configuration management. Development made a concerted effort to ensure that there would be no performance degradation when configuration management is not enabled.
It is important to point out that there are factors that will impact the upgrade time.
For RQM it is the size of the repository and latency (between app server and db server).
For DNG, migration times largely depend on the number of requirements, comments and links. The number of baselines and reviews will add additional cost. Network latency is also a factor, having a linear impact on upgrade time.
See https://jazz.net/wiki/bin/view/Deployment/CollaborativeLifecycleManagementPerformanceReportRDNG60Migration
Having a well-defined process, implemented as much as possible in the tools, is key. Pilots provide a way to test out and evolve your process, as well as implement reports and create training materials (and trainers). Select projects carefully, considering cross-project relationships, mission-criticality, team size/skills, project size/scope… Not all projects need to adopt configuration management – unless they link to other projects that do need it.
Once you have decided that this is worth pursuing – you need to invest some time in determining how your organization can leverage these capabilities in the best way for you.
Assuming your pilot goes well and you’re satisfied that your usage model and processes work as desired, you need to consider production rollout.
Sandbox 6.0.2 and 6.0.3 milestone drivers have GCM enabled. They may not be set up for all capabilities, e.g. reporting.
Also… (next slide)
You can come to our ICE conference to learn more about configuration management, what other clients are doing or planning around these capabilities, and more topics besides.
You can only create reports using Report Builder with an LQE data source:
To report on a particular global, RM, or QM configuration: Use the "Lifecycle Query Engine with Configurations" data source
To report on artifacts in all configurations in an RM or QM configuration-enabled project area: Use the "Lifecycle Query Engine" data source.
DNG limitations:
For custom link types, you need to report from the perspective of the artifact that “owns” the link (best practice is the upstream artifact). This is because there are no back-links, and JRS cannot determine the “reverse-direction” for custom links (it is able to do this for known system link types).
If you define or change custom attributes for a DNG data type, the changes must also be included in the default configuration (initial stream) to be included in reports. Alternatively, you can use custom SPARQL to report on data with type changes not in the initial stream.
When reporting across DNG projects, attributes from a similar type of DNG artifact will appear duplicated for each project included in the scope of the report. To include that attribute for multiple projects, you must select each attribute for the corresponding projects from the ‘Format Results’ section in Report Builder. To use these attributes for filtering, you must set your condition for each attribute appearing as duplicate from their corresponding project area.
Tracking/planning artifacts aren’t truly part of the GC, although they can participate in the GC context by mapping the Release to the GC. WIs aren’t versioned, which is why they continue to operate pretty much the same way in terms of external links and reporting. (Note that the OASIS OSLC Configuration Management spec recommends that you do not version OSLC ChangeRequests, which is what WIs are.) Plans and plan snapshots aren’t part of a GC contribution. The only actual contributions that RTC makes to GCs are from SCM.
Be aware that QM doesn’t have change sets like the other two tools. You could create a separate QM stream to function as a temporary set of changes to merge back into the main stream, simulating a kind of change set. There is no association between change sets in different tools (e.g. this DNG change set goes with this set of RQM changes) – this must be handled manually, for example via naming conventions. There is no automation for QM and personal streams.
136904: [QM] Support Change Sets
Delivering changes:
DNG delivers across the stream hierarchy – so if you have deep branches and all levels need a change(s), it could take a while to deliver changes from the tip of one to the top, and/or up and then down again to a different branch. Stream strategy matters. In DNG, you can either deliver (push) changes starting from the source of change, or accept (pull) changes starting from the stream to be updated (target).
In RQM, you have to merge changes from a baseline (not a stream). You start the merge operation from the stream that you want to update (the target), not the source.
Note that RQM compare/merge capabilities are also available in opted-out projects to resolve conflicting save operations.
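Why stream strategy matters can be sketched as a path through the stream hierarchy: delivering between two deep branches takes one hop per stream up to the common ancestor and back down. The stream names below are made up for illustration:

```python
# Illustrative sketch only: changes in DNG travel along the stream
# hierarchy, so delivering from one deep branch to another means a
# hop per stream up to the common ancestor and down the other side.

parent = {               # child stream -> parent stream
    "main": None,
    "rel-1": "main", "rel-2": "main",
    "rel-1-fix": "rel-1", "rel-2-var": "rel-2",
}

def ancestors(stream):
    chain = []
    while stream is not None:
        chain.append(stream)
        stream = parent[stream]
    return chain

def delivery_path(source, target):
    up = ancestors(source)
    down = ancestors(target)
    common = next(s for s in up if s in down)   # nearest shared ancestor
    return up[:up.index(common) + 1] + list(reversed(down[:down.index(common)]))

# Delivering from a fix branch to a variant branch crosses five streams:
assert delivery_path("rel-1-fix", "rel-2-var") == \
    ["rel-1-fix", "rel-1", "main", "rel-2", "rel-2-var"]
```

A flatter hierarchy shortens these paths, which is one reason to think through your stream strategy before branching deeply.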
After you deliver a DNG change set, you can’t access any information about its contents.
Additional capabilities and automation for DNG change sets were added in 6.0.1:
You can require that any changes be made in a change set. Defined at the level of a stream.
You can require that change sets be linked to an approved CR (any OSLC change request). If this condition is not met, you can’t deliver the changes. Defined at the level of a stream.
When you create a change set from a GC context, you automatically get a Personal Stream with the change sets included, and are put in that context. When you discard or deliver the changes, the change set is removed from your PS, and you are returned to the original GC context. The PS remains to be used the next time you create a change set.
Note that your PS can only include one change set for that RM configuration. If you create a second change set, it replaces the first one in the PS. You can manually switch between the change sets, but you can’t include them both in the PS at once.
You can manually add someone else’s change set to your own PS, if desired. Remember that you only need the PS context if you are linking across applications; if you’re working only in DNG, you can simply change to the change set context within the local RM configuration.
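The personal-stream rule described above (at most one change set per RM configuration, with a second one replacing the first) can be sketched as a small mapping; the names are illustrative only:

```python
# Sketch of the personal-stream behavior: the PS holds at most one
# change set per RM configuration, so creating a second change set
# for the same configuration replaces the first.

class PersonalStream:
    def __init__(self):
        self.change_sets = {}   # rm configuration -> current change set

    def add_change_set(self, rm_config, change_set):
        replaced = self.change_sets.get(rm_config)
        self.change_sets[rm_config] = change_set   # second CS replaces the first
        return replaced          # the change set that was dropped, if any

ps = PersonalStream()
assert ps.add_change_set("RM main", "CS-1") is None
assert ps.add_change_set("RM main", "CS-2") == "CS-1"   # CS-1 dropped from the PS
assert ps.change_sets["RM main"] == "CS-2"
```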
You can create a link from a change set to a WI, but you can’t create it FROM the WI. Note that the link can be to ANY OSLC change request in any OSLC change management provider, whether or not it supports configuration management. (This is the only DNG cross-application link that behaves this way.)
Can’t hold reviews against DNG change sets. (You can against DM change sets.) You can share change sets (i.e. load someone else’s change set) and let others add comments directly to the artifacts (outside the review paradigm), or you can define some kind of “integration stream” or “review stream” to deliver your change sets, hold reviews, before delivering to the main stream.
Can’t report against change sets.
Once you deliver, you can’t see what was inside the DNG change set. You can compare it to a baseline before you deliver. If you linked it to a WI, you could manually capture information about the changes.
Note that you can see the list of changed artifacts in DM change sets after you deliver.
DNG now provides merge capabilities to resolve conflicts between change sets or streams. To avoid conflicting change sets in DNG, use small ones and deliver them quickly, or minimize the # of people working on a given module/artifact/set of artifacts.
If your project has multiple change sets, you’ll need to be able to differentiate them (as well as have some kind of idea which changes were delivered). Implicit change sets are very simple to use – but not so useful in a more complex environment. They have very simple names, and can be very difficult to recognize/reconcile with the changes you made. If you must deliver across streams, explicit change sets are the way to go.
To add streams/baselines to a GC, the GC lead/user only needs general membership in the application (RM, QM, DM, RTC) project. If the GC lead/user will generate streams/baselines (using the new automation from the GCM app), s/he also needs permissions to create and manage streams/baselines in each application project area.
For RTC streams, read access could be further limited by team ownership, in which case, the GC configuration lead would need to be a member of that team if that stream were to be included in a GC.
LQE provides project-level access control based on what is specified in each project area. You can also edit the permissions in LQE itself to remove or add users to the access control list. Note that the list is synchronized regularly with the application project area.
The GCM application doesn’t actually link to the other project areas (in Associations property); it currently uses only the server Friends to find contributions. Because all team members who reference a GC must be members in the project, it may be easier to create a Lifecycle Project that includes GCM as well as the other applications required, in order to add users to all projects at once. Including GCM in a lifecycle project has no other effect or implication in this release.
OOTB reports (including GC): PLE and configuration aware OOTB Reports (tracked by 357440)
GC reporting: [JAF][GC] Provide ACP 2.0 TRS feed. (tracked by 357440)
Example of component skew: Your GC includes 2 other GC components – each includes a contribution from the same RM project, but each has a different version of that contribution. When you set your RM context to your GC, whichever RM contribution shows up first in the GC (i.e. whichever component is first in the hierarchy tree) is the one that gets loaded and will be used in resolving links etc.
If links and configurations aren’t resolving as you expect, use the GCM ability to find skew, and verify that if you do have skew, you’re ordering configurations to resolve the way you want.
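Skew detection can be sketched as a depth-first walk of the GC tree that keeps the first contribution seen for each project, matching the first-wins resolution described above; the data structure and names are illustrative only:

```python
# Sketch of component skew detection: walk the GC tree depth-first,
# keep the first contribution seen per project (first-wins, as in the
# text above), and report any later contribution from the same
# project at a different version.

def find_skew(gc, chosen=None, skew=None):
    chosen = {} if chosen is None else chosen   # project -> winning version
    skew = [] if skew is None else skew
    for project, version in gc.get("contributions", []):
        if project in chosen and chosen[project] != version:
            skew.append((project, chosen[project], version))
        chosen.setdefault(project, version)
    for child in gc.get("children", []):
        find_skew(child, chosen, skew)
    return skew

gc = {"children": [
    {"contributions": [("RM ProjA", "baseline 1.0")]},
    {"contributions": [("RM ProjA", "baseline 2.0")]},   # skew!
]}
assert find_skew(gc) == [("RM ProjA", "baseline 1.0", "baseline 2.0")]
```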
Multi-GC topology: [JAF][GC] Leverage project area association to scope GCs
GC/component granularity: [CLM] Implement Fine Grain Components across the ALM Solution (for GC components)
GC complexity mgmt/nav: [JAF][GC] Support custom attributes, links, and tags on GCs ,
[JAF][GC] Improve ease of construction and use of GC trees
Access permissions: [JAF][LV] Support for configuration level write permissions , [JAF][GC] Improve access control for GCs
GC audit history: [JAF][GC] Show audit history for GC
[JAF][GC] GC application should support more advanced capabilities
Here’s a high level architecture for CLM 5.x
Here’s the high level architecture for CLM 6.x and as you can see, a number of changes were needed in order to support PLE/configuration management.
We won’t cover all the details but will point out a few.
The DNG/RQM core applications went through significant changes to include configuration management as did their web UIs to include the selection/use of a configuration context.
RTC was modified in order to navigate from a WI to a linked versioned artifact.
A new Global Configuration Management application/service was added.
Reporting updates were made to work with configuration-aware data. This is a Tech Preview only and will be discussed later.
The OSLC linking strategy changed to remove the use of stored backlinks. This had cross application impact as well as the creation of a new link indexing service.
Links can be created from source or target side, but internally the request is always processed by the source side
If the source side of a link is a baseline, the link cannot be modified
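A sketch of these two linking rules, with hypothetical names: whichever side the request comes from, the source side processes and stores it, and a frozen (baseline) source rejects the modification:

```python
# Illustrative sketch only, not the actual CLM linking service.
# Rule 1: link requests may originate from either end, but are always
#         processed (and stored) by the source side.
# Rule 2: if the source configuration is a baseline, it is frozen and
#         the link cannot be modified.

class Config:
    def __init__(self, name, frozen=False):
        self.name, self.frozen = name, frozen
        self.links = []

    def store_link(self, predicate, target):
        if self.frozen:
            raise PermissionError(f"{self.name} is a baseline; links are frozen")
        self.links.append((predicate, target))

def create_link(source, predicate, target, requested_from="source"):
    # requested_from is informational: even a target-side request
    # is routed to and processed by the source side.
    source.store_link(predicate, target)

stream = Config("RM stream")
baseline = Config("RM 1.0", frozen=True)

create_link(stream, "validatedBy", "qm/TC-7", requested_from="target")
assert stream.links == [("validatedBy", "qm/TC-7")]

try:
    create_link(baseline, "validatedBy", "qm/TC-8")
    raise AssertionError("expected PermissionError")
except PermissionError:
    pass
```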