Let us start by looking at how the architecture has evolved in the past.
Talk in terms of Miro services so that the audience can relate.
Let them explore the Amazon PDP (product detail page).
Talk in very layman's terms.
Build-time integration is like npm packages - relate them to jar files.
Run-time integration means downloading the artifact at run time.
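The two integration styles can be sketched roughly like this - build-time as an ordinary package dependency, run-time as resolving an artifact URL from a manifest when the page loads. This is a minimal sketch; the package name, manifest shape, and URL below are all hypothetical:

```typescript
// Build-time integration: the host depends on each micro-frontend as a
// versioned npm package, resolved when the host is bundled - much like a
// jar dependency on the JVM. (Hypothetical package name:)
// import { ProductCard } from '@shop/product-card';

// Run-time integration: the host looks up the artifact URL in a manifest
// and downloads it when the page loads, so each team can deploy
// independently of the host's release cycle.
interface MfeManifest {
  [name: string]: string; // micro-frontend name -> artifact URL
}

function resolveArtifactUrl(manifest: MfeManifest, name: string): string {
  const url = manifest[name];
  if (!url) {
    throw new Error(`No artifact registered for micro-frontend "${name}"`);
  }
  return url;
}

// Hypothetical manifest, typically published alongside each MFE deployment.
const manifest: MfeManifest = {
  'product-card': 'https://cdn.example.com/product-card/1.4.2/remote.js',
};

console.log(resolveArtifactUrl(manifest, 'product-card'));
```

The trade-off to draw out: build-time integration gives you compile-time safety but couples releases; run-time integration decouples deployments but moves integration risk to run time, which is exactly why the testing strategy matters.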
Focus more on how testing can be performed on sliced micro-frontends, and also on how the integration of different micro-frontends can be tested.
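One way to test the seam between slices without a full end-to-end setup is a lightweight contract check: both teams assert against the same agreed shape, in their own pipelines. A minimal sketch - the `ProductCardProps` contract and the function names are hypothetical, not from any specific library:

```typescript
// Hypothetical contract: the props the host passes to the "product-card"
// micro-frontend. Host and MFE teams both test against this one definition,
// so an incompatible change fails a fast test instead of a slow e2e run.
interface ProductCardProps {
  sku: string;
  currency: string;
}

// Consumer side (host): build the props it would hand to the MFE.
function buildProductCardProps(sku: string): ProductCardProps {
  return { sku, currency: 'EUR' };
}

// Provider side (MFE): validate incoming props at the boundary.
function isValidProductCardProps(props: unknown): props is ProductCardProps {
  const p = props as Partial<ProductCardProps> | null;
  return typeof p?.sku === 'string' && typeof p?.currency === 'string';
}

// Each team runs its own half independently; together they exercise the
// integration point without deploying either side.
console.log(isValidProductCardProps(buildProductCardProps('B00X123')));
```

In a real setup this role is usually played by dedicated contract-testing tooling; the point here is only the shape of the idea.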
The test pyramid -> A classic metaphor in automated testing is the test pyramid:
It's been a useful model for ages, and it is still a good conversation starter, but it lacks a few things:
It isn't really clear what the horizontal dimension is - does wider mean more tests? More test scenarios? More features tested?
It isn't clear what the vertical dimension is either - where do contract tests fit? What if you test at an API rather than a UI? Should you run tests in a particular order?
In many cases the best shape is nothing like a pyramid - some systems are well suited to integration tests and tend towards a pear shape. Some systems are more of an hourglass, with lots of UI/API tests, lots of unit tests, and not much in between.
I've seen people tweak the pyramid - adding layers, adding axes, adding explanations. At some point, that becomes confusing too.
A different perspective - Swiss Cheese
The Swiss Cheese model of testing is a lot more helpful when it comes to talking about both why we test and how we should test.
The basic idea is: consider your tests like a big stack of swiss cheese slices - you know, the kind with holes in them:
Now layer those cheese slices vertically - each layer represents a different kind of test. Order them in the order you run them - usually the simplest, fastest feedback first, then the slower layers below:
You can imagine defects as physical bugs which fall down the diagram, and are caught at different levels - different slices of cheese.
Some bugs might fall all the way through a series of holes and not get caught. This is bad.
I do think there's value in some browser-driven end-to-end tests - there are bugs we can only catch that way.
But we can call them "smoke tests" or "end-to-end tests" or something, not "acceptance tests".
If we want lots of tests of UI features, consider tools that test within the framework we use - we might want several "cheese slices" of UI tests; see how the Material UI tests are layered for an example.
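As a toy illustration of one such in-framework "cheese slice" (no real browser; in practice you would reach for something like Testing Library - the component below is hypothetical): render a small piece of UI to a string and assert on the output.

```typescript
// Hypothetical component: renders a price tag to an HTML string so it can
// be checked in a fast, framework-level test - one thin "cheese slice"
// between plain unit tests and browser-driven smoke tests.
function renderPriceTag(amount: number, currency: string): string {
  if (amount < 0) {
    throw new Error('price cannot be negative');
  }
  return `<span class="price">${currency} ${amount.toFixed(2)}</span>`;
}

// Fast feedback: no browser, no network, runs in milliseconds.
console.log(renderPriceTag(19.9, 'EUR')); // → <span class="price">EUR 19.90</span>
```

Tests like this catch rendering and formatting bugs in the fast layers, leaving only the genuinely cross-slice bugs for the slower browser-driven smoke tests below.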
Let's understand the testing strategy by going through the advantages of the architecture.
A tailor-made quality strategy can be built for each individual MFE based on its use case.
A quality strategy can include:
Ways of working
Definition of Done
Satisfying cross-functional testing needs:
Performance [UI/API]
Security
Accessibility
Cross-platform
Bug bash
Testing layers
Brijendra
With all the good things come challenges as well.
An application can have multiple e2e solutions and repositories: there might be some level of duplicated tests across app-level automation suites, and that is perfectly OK considering the ways these tests are executed.