This document discusses different stream strategies for software development using Rational Team Concert (RTC). It describes single stream development, where all work is delivered to a single stream. It also covers multiple release development, with dedicated development and maintenance streams. Multiple application development uses streams to segregate development for multiple software components. The document demonstrates adding a component to a workspace and delivering changes to different streams. It presents two use cases: one for a small team with multiple components and one for a large team. It concludes with an invitation for questions and discussion.
2. Single Stream Development
• "straight line development"
• the simplest form of software development
• suited to small teams and organizations
• all work is delivered to a single stream, which is associated with the project/team
4. Naming Conventions
• Components – "Xxx_comp", for example Acme_comp
• Streams – "Xxx-xxx", for example Acme-dev, Acme-integ, Acme-rel-3.0
• Workspaces – "proj_wkspc_userid_xx", for example Acme_wkspc_dtoczala_dev, Acme_wkspc_integration_build
• Plans – "desc time plan", for example Acme Sprint 1 Plan, Acme Release 2.0 Plan, Jade Team Sprint 7 Plan
5. Multiple Release Development
• the team is working on release X, but still needs to support and do maintenance for releases X-1, X-2, and so on
• a dedicated development stream
• maintenance streams begin with the released baselines
8. Multiple Application Development
• large-scale development efforts
• development of multiple software components or applications
• streams are used to segregate development efforts, and to control development environments
10. Demo – Add a Comp to Wkspc
• open your workspace
• add the "missing" component to your workspace
• save the workspace
• load that component into your sandbox
• do not make changes to that component, because you will not be able to deliver them to the stream
11. Demo – Add a Comp to Wkspc
• Can Dan deliver changes to Gamma_Comp to the Gamma development stream, and changes to Acme_Comp to the Acme development stream? Yes, he can.
• He would need to deliver any Acme component change sets to the Acme development stream, and any Gamma component change sets to the Gamma development stream. So he would deliver first to one stream, and then repoint his workspace at the second stream.
We sometimes refer to single stream development efforts as "straight line development". This is the simplest form of software development, and it requires the least amount of administration and oversight. It is easy to set up, easy to understand, and works well for small teams and organizations. We often see this style of stream strategy with IT shops, R&D efforts, and with teams that have an ongoing application maintenance effort.
In single stream development, all work is delivered to a single stream, which is associated with the project/team. Developers create one or more workspaces which deliver to this stream, as well as the sandboxes on their local machines which are related to these workspaces. Each developer/contributor works in their sandbox, and will check in their changes to their repository workspace. As change sets are created, they are associated with work items. When these work items are completed, their associated change sets are then delivered to the project/team stream, and a new baseline may be created. At this point these changes become visible to the remainder of the team, who will then accept the incoming change sets.

Figure 1 is a diagram of what a single stream strategy looks like when visualized in RTC. Note how each workspace is associated with an individual developer, tester, analyst or stakeholder, and all workspaces have the same project/team stream as their delivery target. So each of the users (Alan, Deb, Dan, Don and Tim) has their own repository workspace. Any changes that they check in are only visible to them, and are stored in their repository workspace. Once those changes are ready to be shared with the rest of the team, they are delivered to the team stream (Acme Stream), and they will then appear as incoming changes for the remaining team members.
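The check-in/deliver flow described above can be sketched as a toy model. This is plain Python for illustration only, not the RTC SDK; all class and method names here are invented:

```python
# Toy model of RTC-style change flow: check-ins stay private to a
# repository workspace until they are delivered to the shared stream.
class Stream:
    def __init__(self, name):
        self.name = name
        self.change_sets = []          # change sets visible to the whole team

class Workspace:
    def __init__(self, owner, flow_target):
        self.owner = owner
        self.flow_target = flow_target # the stream this workspace delivers to
        self.pending = []              # checked-in but undelivered change sets

    def check_in(self, change_set):
        # Visible only to this workspace's owner.
        self.pending.append(change_set)

    def deliver(self):
        # Publish all pending change sets to the team stream.
        self.flow_target.change_sets.extend(self.pending)
        self.pending = []

acme = Stream("Acme Stream")
deb = Workspace("Deb", acme)
dan = Workspace("Dan", acme)

deb.check_in("fix login bug")
assert "fix login bug" not in acme.change_sets  # private until delivered
deb.deliver()
assert "fix login bug" in acme.change_sets      # now incoming for the team
```

The point of the sketch is the two-step visibility: `check_in` is private to the workspace, and only `deliver` makes the change set an incoming change for everyone else flowing with the stream.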
I need to take a slight detour here to address naming conventions. Naming conventions are critical to a successful deployment of Jazz. The tools don't care what you name things, but since HUMANS need to interact with the tools, naming conventions become a critical piece of the solution. Good naming conventions will allow anyone interacting with the software development environment to easily navigate and find the files, information, and data that they are looking for. Table 1 shows a sample of some naming conventions that I have seen in use with organizations using RTC.

The key to having useful and successful naming conventions is to have everyone using those naming conventions. They should be simple to remember, simple to use, and should ease the understanding of the people using the tools. Complicated naming conventions get ignored. Don't worry about handling every possible case with your naming conventions; that will make them too complex. There will always be exceptions to the naming conventions, just make sure that the goal of having naming conventions is met. The goal is to make things easier for the people using the software development environment.
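One way to keep everyone on the conventions is to check names mechanically. The sketch below encodes the slide's patterns as regular expressions; the exact regexes are my assumption about the rules, so adjust them to your own conventions:

```python
import re

# One possible encoding of the slide's naming conventions as regexes.
# These exact patterns are an assumption -- adapt them to your own rules.
CONVENTIONS = {
    "component": re.compile(r"^[A-Z][a-z]+_comp$"),             # e.g. Acme_comp
    "stream":    re.compile(r"^[A-Z][a-z]+(-[a-z0-9.]+)+$"),    # e.g. Acme-dev
    "workspace": re.compile(r"^[A-Z][a-z]+_wkspc_[a-z]+_[a-z]+$"),
}

def check_name(kind, name):
    """Return True if `name` follows the convention for `kind`."""
    return bool(CONVENTIONS[kind].match(name))

assert check_name("component", "Acme_comp")
assert check_name("stream", "Acme-rel-3.0")
assert check_name("workspace", "Acme_wkspc_dtoczala_dev")
assert not check_name("component", "acme")  # flagged for review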
In multiple release development, developers may be expected to work on the new release, work on maintenance (bug fixes), or both. Developers will have to determine where most of their work is being done, so they can decide how many Jazz workspaces to use, and where to base those workspaces. Teams need strong naming conventions for their streams and workspaces, both to maintain a sense of order and so everyone can easily find the things that they might be looking for. Once you have good naming conventions in place for the software components, the streams, and the workspaces, it is time to begin organizing those streams.
Just looking at what we see in Figure 2, we can immediately see how things are being done. I see the Acme development stream at the middle of this diagram, labeled as Acme-Development. I can see that Tim has a workspace that is based on the Acme development stream (called Acme_wkspc_tim_dev), and that his current and default target for the delivery of any changes is the Acme development stream. Don and dtoczala have workspaces with similar relationships to the Acme development stream.

The interesting cases are those of Deb and Alan. Deb has a workspace (Acme_wkspc_deb_dev) that shows a dotted line relationship to the Acme development stream, but has a solid line and default relationship to the Release 1.0 stream, called Acme-release-1-0. The default tag indicates that this is her default delivery target, and the solid line indicates that this is also her current delivery target. When we look at Alan's workspace, we see that it is similar to the situation Deb is in, with Alan's current delivery target being the Release 1.0 stream. What is different is that Alan's default target for delivery is the Acme development stream (you can see the label on that arrow).
Following the delivery, we have a new stream called Acme-release-1-1, which represents the code in the 1.1 release of our Acme component. Note that Deb is now doing maintenance work on Acme release 1.1, and Alan is doing maintenance work on Acme release 1.0. Both Deb and Alan have dotted lines coming from their workspaces, indicating that they also deliver change sets to alternative targets. The remainder of the team is still delivering their work to the Acme development stream.

One thing to note in Figure 3 is the direction of the flow of changes. In this scenario we show the changes flowing from the development stream to the two maintenance streams. The arrows could just as easily be pointing in the opposite direction; it just depends on where you initially make changes. In this scenario, we make our changes on the code currently being developed, and then flow those changes to the maintenance streams. In the case of Alan and Deb, they may deliver their changes to the maintenance stream first, and then deliver those same change sets to the current development stream. What is important here is that you show the relationships between the streams. We typically recommend that changes flow from maintenance streams to the current development stream, since this is the way the work is typically done (I deliver my maintenance fix first, then I incorporate that fix into the current development stream). RTC will support flowing changes in either direction, so choose the one that best fits the way that you develop and maintain your software.

Another thing to note is the relationship between the streams. When you indicate a flow target for a stream, you must realize that this is just the identification of a relationship. There are no enforced semantics associated with this relationship. You cannot deliver changes from one stream to another. The key here is the use of baselines (and snapshots) to identify key configurations. Each stream will have its own set of component baselines, which should represent the stable state of the stream at different points in time.
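The "maintenance fix first, then development" flow can be sketched as a toy model. Again, this is invented illustration code, not the RTC SDK; the stream and defect names are hypothetical:

```python
# Toy model (not the RTC SDK): the same change set can be delivered to
# more than one stream by retargeting the workspace's delivery.
class Stream:
    def __init__(self, name):
        self.name = name
        self.change_sets = set()

class Workspace:
    def __init__(self, owner, flow_target):
        self.owner = owner
        self.flow_target = flow_target     # current delivery target

    def deliver(self, change_set, target=None):
        # Deliver to the current flow target unless an alternate is given.
        (target or self.flow_target).change_sets.add(change_set)

maint = Stream("Acme-release-1-0")
dev = Stream("Acme-Development")

alan = Workspace("Alan", flow_target=maint)  # current target: maintenance
fix = "defect-123: null check"

alan.deliver(fix)        # maintenance fix lands in the release stream first
alan.deliver(fix, dev)   # then the same change set flows to development

assert fix in maint.change_sets and fix in dev.change_sets
```

The design point mirrors the text: the relationship between the two streams is only informational, and it is the workspace that carries the change set from one stream to the other.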
In this example I am showing the workspace owners, and not the workspace names. You can control this by checking and unchecking the "Show name instead of owner" checkbox in the Properties tab when you click on the workspace in the diagram.

Here we see our developers dedicated to the development of a single software component or application. Alan is working on the Acme development effort, and Don is working on the Beta development effort. Development in each of these streams occurs only on those individual components. The systems integration stream, called System-int, then uses changes from each of these development streams, and the integration of the components (or applications) is done in this stream.

Sometimes these efforts on the individual components will require visibility to the other components available to the project. If you look at Dan's workspace in the lower right of Figure 4, you will notice that Dan has both the Acme_comp and Gamma_comp components in his workspace, but only the Gamma_comp component is available in the Gamma development stream. Dan will be able to build and test his work, but will be unable to deliver any changes to the Acme_comp component. With this type of strategy, teams can work in isolation and integration activities can be coordinated without slowing development of the individual components or applications.
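Dan's situation can be reduced to one rule: a change set can only be delivered if its component exists in the target stream. A minimal sketch of that rule (toy code, invented names, not the RTC SDK):

```python
# Toy model: delivery is scoped to the components a stream contains.
class Stream:
    def __init__(self, name, components):
        self.name = name
        self.components = set(components)  # components carried by this stream
        self.change_sets = []

def deliver(stream, component, change_set):
    """Deliver a change set for `component`; refuse if the stream lacks it."""
    if component not in stream.components:
        raise ValueError(f"{stream.name} does not contain {component}")
    stream.change_sets.append((component, change_set))

gamma_dev = Stream("Gamma-Development", ["Gamma_comp"])

deliver(gamma_dev, "Gamma_comp", "add parser")      # fine
try:
    deliver(gamma_dev, "Acme_comp", "tweak build")  # Dan cannot do this
except ValueError as err:
    blocked = str(err)

assert gamma_dev.change_sets == [("Gamma_comp", "add parser")]
assert "does not contain" in blocked
```

So Dan can load Acme_comp for building and testing, but any Acme change sets have to go to a stream that actually carries that component.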
The previous examples are geared towards a large-scale development team. Consider Figure 1 and Figure 4 in light of a smaller development team (let's say five people). Also let's use example component names of "Client", "Server", and "Install" instead of alpha/beta/gamma.

A small team would have one stream, "Acme Integration". It would contain the three components "Client", "Server", and "Install". Dan and Alan are server developers, so they only load "Server". Deb is a client developer and loads "Client". Tim does install, loading "Install". Don is the team lead and has his hands in everything, so he loads all three components. (This is the modified Figure 1.)

Fast-forward in time. The Acme project is wildly successful and there is a new feature list five miles long. Don's manager hires thirty new developers and puts ten on each component. Now Figure 4 makes a lot more sense. Don, our trusty team lead, is promoted to architect. He sets up three streams so that each group of developers can work in isolation from the others. Don is in charge of integrating changes from the individual streams into the main integration stream. Alan, Deb, and Tim are appointed component leads and maintain the individual streams. The other 30 developers only load the one component they need visibility to.
It is also possible to use a repository workspace as a team area, rather than a stream. What are the pros and cons of using a repository workspace rather than a stream?

You can use a repository workspace the same way that you would use a stream, but with this model everything that I check in will become immediately visible to the rest of my teammates, even if it doesn't compile. You also run a stronger risk of creating dependencies between work items. Take the case where you and I are working on file "foo". If we each have our own repository workspaces, attached to the same stream, then I can check in my changes 2, 5, or 500 times, and you do not see my changes until I either deliver them to the stream, or (if you want to see them before delivery) until you apply my change set to your repository workspace.

If we both have our local workspace (sandbox) connected to the same repository workspace, then our changes to "foo" are immediately visible to the other developer. What is worse is that if I change "foo", then you change "foo", and then I change "foo" some more, we have created a situation where I am unable to deliver my change set unless your change set is delivered at the same time (since my second change may depend on your changes). If we are each in our own repository workspace, then this is not an issue.
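The dependency trap described above can be sketched directly: interleaved check-ins against the same file in a shared workspace form an ordered history, and later change sets depend on earlier ones. This is a toy illustration with invented names, not the RTC SDK:

```python
# Toy sketch of the shared-workspace trap: interleaved edits to one
# file create change sets that cannot be delivered independently.
class SharedWorkspace:
    def __init__(self):
        self.history = []  # ordered change sets against the same files

    def check_in(self, author, files):
        cs = {"author": author, "files": set(files)}
        # A change set depends on every earlier change set touching the
        # same files -- those must be delivered before (or with) it.
        cs["depends_on"] = [c for c in self.history if c["files"] & cs["files"]]
        self.history.append(cs)
        return cs

ws = SharedWorkspace()
mine1 = ws.check_in("me", ["foo"])
yours = ws.check_in("you", ["foo"])
mine2 = ws.check_in("me", ["foo"])

# My second change set cannot be delivered without yours.
assert yours in mine2["depends_on"]
```

With separate repository workspaces there is no shared pending history, so `mine2` would never pick up a dependency on `yours` in the first place.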
In summary, my personal opinion is that the choice should be made mainly according to the developer's working environment: the operating system and the IDE they work with.