1) Documentation is important for collaborating on projects and preserving knowledge, but it can be costly, so finding the right balance matters.
2) There are different types of documentation for different purposes and audiences, from informal communication to more formal specifications and models.
3) Agile documentation practices emphasize creating simple documents iteratively as needed, publishing them for feedback, reusing content, and using basic tools to keep the focus on content over presentation.
A taxonomy of functions related to Enterprise Content Management. The model can be used to provide insight into the current and desired support for content in the organization.
JDD 2016 - Jacek Bukowski - "Flying To Clouds" - Can It Be Easy? PROIDEA
Nowadays "cloud" and "microservice" terms are used all the time, even overused. Does any system must be the "microservices" deployed in the "cloud"? Definitely not! However once you see that your system may benefit from that architecture, the next question is how to get there - how to fly to the clouds?
Spring was always about simplifying the complicated aspects of your enterprise system. Netflix went to microservice architecture long before this term even was created. Both are very much contributed to open source software. How can you benefit from joint forces of the both?
JDD 2016 - Jakub Marchwicki - 6 Tips For Your JavaEE Project To Be Less Depressing PROIDEA
People are stuck with JavaEE not because they love it, but because it was in their workplace, it is still there, and it will be there long after they leave. While many engineers think about going reactive, functional (or any other fashionable buzzword), some are still struggling with how to migrate WebLogic 9 to WebLogic 12 and move from a Java not supported for the past 8 years (like Java 1.4) to one that lost support only a year ago (Java 7). While their lives can be miserable, there are ways for them to win JavaEE back and have fun again. It still won’t be a super-duper modern framework, it won’t have all the latest pre-summer goodies, it won’t be a sweet language (full of syntactic sugar candies), but it can be a solid set of tools that makes development a pleasure again (kind of). In modern JavaEE you are no longer limited to the 'standard'; you can take advantage of a limitless set of tools and libraries. For some people it will still be the bad and ugly JavaEE, "feels like Spring but n years ago", "a Spring wannabe in the poor man’s land" - that’s true. But if you can’t beat Them (and let’s be honest - in most enterprise-ish environments you can’t beat Them), join them! And have your JavaEE the way You like it. This presentation will walk through multiple libraries that can help you change your project from "This is The Standard Even If It Doesn’t Fit" to something far more approachable, where your tools don’t feel like Maslow’s Hammer. I’ll be sharing experience from multiple trainings, projects and refactorings of old-but-still-in-production JavaEE applications, and the tools we used to get away from The Standard.
JDD 2016 - Michal Matloka - Small Intro To Big Data PROIDEA
Pig, Hive, Flink, Kafka, Zeppelin... if you are now wondering whether someone just tried to offend you or whether those are just Pokemon names, then this talk is for you! Big Data is everywhere, and new tools for it are released almost at the speed of new JavaScript frameworks. During this entry-level presentation we will walk through the challenges which Big Data presents, reflect on how big "big" really is, and introduce the currently most fancy and popular (mostly open source) tools. We'll try to spark off interest in Big Data by showing application areas and by throwing out ideas you can later dive into.
JDD 2016 - Jakub Kubrynski - JPA - Beyond Copy-Paste PROIDEA
JPA is the main building block in most Java projects. However, a lot of developers still use it without a deep understanding of the technology, relying mainly on applying the copy-paste methodology from StackOverflow or existing system entities. During this presentation, I will consolidate knowledge about object-relational mapping. We'll see how lazy loading works under the hood and understand the difference between a set, list or bag. We will also talk about common traps leading to significant decreases in performance or improper behaviour of the system.
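The set-versus-list-versus-bag distinction mentioned above ultimately rests on plain collection semantics. A minimal plain-Java sketch (no JPA involved; the tag values are made up for illustration) of the behavioural difference an entity mapping inherits from its collection type:

```java
import java.util.*;

public class CollectionSemantics {
    // Returns {listSize, setSize} after adding the same element twice.
    public static int[] sizes() {
        // A List (JPA treats an unordered List as a "bag"; with an
        // order column it becomes a true "list") keeps duplicates
        // and insertion order.
        List<String> bag = new ArrayList<>();
        bag.add("tag-a");
        bag.add("tag-a");   // duplicate accepted

        // A Set mapping silently collapses duplicates, which is why
        // equals()/hashCode() on entities matter so much.
        Set<String> set = new HashSet<>();
        set.add("tag-a");
        set.add("tag-a");   // duplicate ignored

        return new int[] { bag.size(), set.size() };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sizes())); // [2, 1]
    }
}
```

The same divergence shows up in generated SQL: a bag can be appended to blindly, while a set forces the provider to check for the element first.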
JDD 2016 - Marcin Zajaczkowski - CD For Open Source PROIDEA
Do you have a project where releasing a new version requires manually running a series of commands, is error-prone and simply boring? Or maybe you would like to start a new open source project with Continuous Delivery, but you are scared off by the configuration needed to make a released version conveniently available to others? During the presentation I will show how, in 15 minutes (*), you can create the skeleton of your new open source project with a working mechanism for automatically releasing versions to Maven Central after (every) commit. We will work on a stack based on Java/Groovy/Scala/Kotlin, Gradle, GitHub, Travis and Maven Central. If your existing application uses similar solutions, adopting the new mechanism should take only a little more time. Continuous Delivery is hard for (at least) two reasons. 1. The project must be written and tested well enough that you are not afraid to release a version at any moment based solely on the results of automated tests (without manual testing). 2. The release activities (managing version numbers, testing, building packages, tagging changes, or uploading to a public package repository) are usually not trivial to automate. Nobody will take care of the quality and automated testing of our solution for us. However, automatic release mechanisms are above all infrastructure, and here reinventing the wheel (especially for open source libraries with a simple release workflow) is usually not the optimal solution. Learn how to use existing solutions - quickly and simply.
JDD 2016 - Tomasz Gagor, Pawel Torbus - A Needle In A Logstack PROIDEA
A case study of how well-thought-through log analysis enables mobile developers to get a clearer picture of how their mobile app performs across a spectrum of devices, and of how the information contained in logs, when presented in a human-readable manner, can have a tremendous impact on troubleshooting and deployments and provide valuable business feedback. How to see the mobile end of an e-publishing platform. Currently a significant number of systems and apps need to work in a distributed manner. For the back-end this means a cluster of servers, multiple availability zones or regions; for mobile, an astonishing number of devices with different and constantly changing characteristics. Tests and code analysis do not always provide an answer to how users and devices actually work with the app; we need to get true data from “the wild”. Event collecting and analyzing systems allow us to gather the data, filter it, transform it and swiftly act upon it. Enter the world of collecting, processing and visualizing events and integrating them into an ecosystem. Discover it with more ease by learning from our successes as well as our mistakes.
One of the basic traits of a several-year-old representative of homo sapiens is that it asks an unlimited number of questions such as "why?", "how come?", "what for?", "who?", "how?". With two little monsters at home, besides requests for sweets, at any moment I risk being hit with such an inconvenient question. If my answer is imprecise, vague or inadequate, I may face an attack with redoubled force, not to mention the educational consequences. Fortunately, or maybe not, this lasts only for a while. Later, a good 20 years on, some people in the hallways shout "Programming, Motherfucker!" while professing Driven-Development, but the kind with the letter R at the front; others comb through pages of documentation in suffering and with a sense of injustice; and some are just chalking up a project failure after spending several months on something that turns out to be a mistake. In this presentation I would like to go back to the roots, recall one of the basic questions of childhood, and talk about how we managed to save ourselves a lot of nerves, grey hairs and sleepless nights when facing new business requirements to be met on a short deadline.
JDD 2016 - Tomasz Lelek - Machine Learning With Apache SparkPROIDEA
How to use text data to draw conclusions about users of our website or forum?
This talk describes a solution to a particular problem using Machine Learning and Statistics. Based on a provided forum, we will create a program that learns the structure of posts using Natural Language Processing techniques. Then, once proper Machine Learning models are trained, the program is able to answer, with a probability, which of the forum's users wrote a particular post.
We will go through all the steps required to create Machine Learning models for text. How do you use Natural Language Processing and Bag-of-Words techniques to analyse text? How do you prepare input data for further processing by Machine Learning models? I will answer those questions. The implementation is written in Apache Spark, so we will get to know that technology along with some important libraries like Spark MLlib and the DataFrame API. In MLlib we will use the Gaussian Mixture Model and Logistic Regression.
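To make the Bag-of-Words step above concrete, here is a minimal sketch in plain Java (Spark's MLlib performs an equivalent transformation at scale; the example sentence is made up for illustration). Each post becomes a map from token to occurrence count, which is the input a classifier such as Logistic Regression consumes:

```java
import java.util.*;

public class BagOfWords {
    // Turn a post into a term-frequency map: token -> count.
    // Real pipelines add stop-word removal, stemming, etc.
    public static Map<String, Integer> vectorize(String post) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : post.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // e.g. {spark=2, makes=1, big=1, data=1, simple=1, scales=1}
        System.out.println(vectorize("Spark makes big data simple, Spark scales"));
    }
}
```

The intuition behind author attribution is that these count vectors differ systematically between writers, so a model trained on labelled posts can estimate the probability of each author for an unseen one.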
JDD 2016 - Michal Bartyzel, Lukasz Korczynski - Refaktoryzacja Systemu eBanko... PROIDEA
REFACTORING AS AN EXAMPLE OF ORGANIZATIONAL CHANGE - A CASE STUDY FROM NORDEA BANK AB S.A.
"Cut the time to build new functionality from several months down to 30 days" - that was the first goal we set ourselves. Within a few months it led us to a new version of our system, to refactoring, to experimenting with new technologies, to creating guilds and to closer cooperation with the business.
During the presentation we will show the path and the key points of this change. In particular, you will learn:
* How did we define the goals of the change?
* How did we identify technological and organizational problems?
* How did we convince management to refactor?
* How did we recruit more people willing to join our initiative?
JDD 2016 - Michał Balinski, Oleksandr Goldobin - Practical Non Blocking Micro... PROIDEA
We will show how to write applications in Java 8 that do not waste resources and that maximize effective utilization of CPU/RAM. We will present a comparison of blocking and non-blocking approaches for I/O and application services. Based on microservices implementing simple business logic in the security/cryptography/payments domain, we will demonstrate the following aspects:
* NIO at all edges of the application
* popular libraries that support NIO
* single-instance scalability
* performance metrics (incl. throughput and latency)
* resource utilization
* code readability with CompletableFuture
* application maintenance and debugging
All of the above is based on our experiences gathered during development of software platforms at Oberthur Technologies R&D Poland.
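To give a flavour of the CompletableFuture readability point above, a minimal stdlib-only sketch (the service names are invented for illustration; real services would wrap NIO calls) of composing two asynchronous stages without blocking a thread between them:

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingPipeline {
    // Hypothetical async services standing in for remote calls.
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    static CompletableFuture<String> sign(String payload) {
        return CompletableFuture.supplyAsync(() -> "signed(" + payload + ")");
    }

    public static CompletableFuture<String> pipeline(int id) {
        // thenCompose chains the second call onto the first;
        // no thread sits blocked waiting between the two stages.
        return fetchUser(id).thenCompose(NonBlockingPipeline::sign);
    }

    public static void main(String[] args) {
        System.out.println(pipeline(7).join()); // signed(user-7)
    }
}
```

The blocking equivalent would call `get()` after each stage, pinning a thread per in-flight request; the composed version frees the thread between stages, which is where the scalability of the non-blocking approach comes from.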
Blazing Fast Feedback Loops in the Java Universe Michał Kordas
We all know that fast feedback loops make a real difference and that they are the most important part of agile development in general. This is why I want to take you on a tour of a variety of ways to increase quality and optimize feedback loops that I’ve encountered in the JVM-based projects that I’ve worked on so far.
Successful Single-Source Content Development Xyleme
This presentation looks at why single-source content development is rapidly becoming a strategic initiative within organizations. Content management experts Dawn Stevens of Comtech and Stuart Grossman of Xyleme show you how to design granular content for reusability across products, functions and delivery modalities, and how to assess your organization’s readiness for the move to single source. To view the webinar please visit: http://www.xyleme.com/download-form?type_of_download=Webinar&nid=218
The Future is Now: Neuroscience, Chatbots, Voice, and Microcontent Saiff Solutions, Inc.
Discover the role of microcontent as a core component of structured topics.
Presented by:
Barry Saiff - Founder and CEO, Saiff Solutions, Inc.
Rob Hanna - President, Precision Content Authoring Services
Islandora Webinar: Building a Repository Roadmap eohallor
Learn more about planning a repository project from people who have done it dozens of times. We'll dive into the important questions to ask, how to focus on users, and avoid the 'shopping list' approach.
The presentation will discuss a phased approach to repository planning that makes sense for any institution or organization. The approach highlights prioritizing must-haves, understanding dependencies, and planning realistic timelines that include iteration.
DITA Quick Start Webinar Series: Building a Project Plan Suite Solutions
Presenters: Joe Gelb, President, Suite Solutions and Yehudit Lindblom, Project Manager, Suite Solutions
Abstract:
Migrating to DITA XML-based authoring and publishing promises rich rewards in terms of lower costs and faster time to publication. But DITA migration also requires a well-planned process that will lead you through all the steps of a successful implementation. In this webinar, experienced project manager Yehudit Lindblom and Joe Gelb will review a process that covers all the bases, helping you build your game plan for a winning DITA implementation.
Visit us at http://www.suite-sol.com
Follow us on LinkedIn http://www.linkedin.com/company/527916
Is your technical content development organization considering a move to structured authoring and/or DITA (Darwin Information Typing Architecture)? This presentation provides a high-level introduction to what DITA is--and what the benefits of moving to DITA are. DITA is an excellent solution for many--but not all--organizations and projects. This introduction can help you begin to understand why DITA may or may not be a good solution for you.
Benefits of using software design patterns and when to use design patterns Beroza Paul
Benefits of using design patterns
Drawbacks of using design patterns
When to use the singleton design pattern?
When to use the builder design pattern?
When to use the facade design pattern?
When to use the adapter design pattern?
When to use the decorator design pattern?
When to use the state design pattern?
When to use the strategy design pattern?
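As one concrete instance of the questions above, a minimal Java sketch of the strategy pattern (the pricing-rule domain is invented for illustration): it fits when a behaviour must be swappable at runtime without type-checking conditionals scattered through the caller.

```java
public class StrategyDemo {
    // The strategy interface: a pluggable pricing rule.
    interface Discount {
        double apply(double price);
    }

    // Concrete strategies; callers never branch on a type flag.
    static final Discount NONE = price -> price;
    static final Discount SEASONAL = price -> price * 0.9;

    // The context depends only on the interface, so the behaviour
    // can be chosen or swapped at runtime.
    static double checkout(double price, Discount discount) {
        return discount.apply(price);
    }

    public static void main(String[] args) {
        System.out.println(checkout(100.0, SEASONAL)); // 90.0
    }
}
```

If the set of behaviours is fixed and tied to an object's lifecycle, the state pattern is usually the better fit; strategy is for interchangeable algorithms chosen by the client.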
JMeter webinar - integration with InfluxDB and Grafana RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell us all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details of how to best design a sturdy architecture within ODC.
Epistemic Interaction - tuning interfaces to provide information for AI support Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 3 DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
11. Software development
▪ Developing software continuously and iteratively involves:
• communicating and sharing knowledge and experience;
• formalising information;
• collaborating.
People need to talk and read in order to understand requirements, specifications, and other informal artifacts (meeting notes, requirements, draft designs...), with the goal of producing more formal artifacts (code, formal specifications, models...).
To collaborate, we need to share information and artifacts. Different kinds of artifacts are typically produced and presented using different tools and environments.
Informal and personal communication is usually best, but sooner or later you may need to produce documentation in order to preserve, communicate, and share the knowledge you have acquired about your project or software system.
12. Knowledge acquisition
▪ Most (but not all) knowledge is in the heads of experts
▪ therefore it must be shared with non-experts
▪ Experts have vast amounts of knowledge
▪ therefore there is a need to focus on essentials
▪ No single expert knows everything
▪ therefore experts must interact with each other
▪ Experts have a lot of tacit knowledge
▪ therefore they know more than they realize
▪ Experts are busy and valuable people
▪ therefore capturing techniques must be non-intrusive
▪ Knowledge has a "shelf life"
▪ therefore it is created, evolves, and must be maintained
13. Understanding = docs + conversations
▪ Knowledge acquisition requires both content and context.
▪ Knowledge can be partially preserved as documents, but not all of it.
▪ Understanding the knowledge is the real issue.
▪ Understanding needs both documentation and conversation
(documents capture only 15-25% of all the knowledge of complex systems).
19. Software documentation
▪ Documentation is good, but hard and costly
▪ Every project benefits from good documentation, but documentation has costs, so we need to decide on the "right dose" of documentation that guarantees the success of our project.
▪ Each project is a different case.
▪ Simple and small projects may require little documentation.
▪ For reusable software, in contrast, good documentation is crucial, because nobody reuses what they don't know, don't understand, or what seems difficult to reuse.
26. Core practices
▪ Create simple documents, but just simple enough
• A minimalist document must be brief: it shouldn't contain everything, but just enough information to fulfil its purpose for its intended audience.
• The simplicity and understandability of the contents must be evaluated by the readers.
▪ Create several documents at once
• To represent all the aspects of a system, and to serve all audiences and purposes, it is often necessary to use different documents. Edited in parallel and properly cross-referenced, these help writers "dump" their knowledge more effectively, as this avoids switching contexts.
27. Core practices
▪ Publish documents publicly
• Publicly available documents, published for everyone to see, support knowledge transfer and improve communication and understanding.
• Feedback increases, and the quality of documents quickly improves.
▪ Document and update only when needed
• To be cost-effective, documents should be created and refined iteratively, only when needed, not when desired.
▪ Reuse documentation
• Reuse the contents and structure of existing documentation to improve the productivity and quality of the documentation.
• Reusable contents must be modular, closed, and readable in any order.
28. Additional practices
▪ Use simple tools
• Using simple tools helps writers focus on the contents rather than on the presentation.
▪ Define and follow documentation standards
• Writers must agree on and follow a common set of documentation conventions and standards in a project.
▪ Document it, to understand it
• Documenting helps formalise ideas about single aspects, in isolation from the others.
30. Internal vs External Documentation
▪ Internal documentation
• is limited to low-level, textual explanations, usually included in source code comments.
▪ Higher-level external documentation
• is capable of capturing the components and connectors of an architecture and the interactions of cooperating classes;
• however, the consistency between external documents and source code can be difficult to maintain as the system evolves over time.
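As a minimal sketch of the distinction (the function and its wording are invented for this illustration, not taken from the slides), internal documentation lives next to the code it explains, while the component-level picture belongs in external documents:

```python
def checkout(cart, payment_method):
    """Charge the payment method and create an order.

    Internal documentation: this docstring explains low-level behaviour
    right where the code lives. How the checkout, inventory, and billing
    components interact is external documentation, kept elsewhere and
    synchronised with the code by hand.
    """
    # Low-level comment: reject empty carts before contacting billing.
    if not cart:
        raise ValueError("cart is empty")
    return {"items": list(cart), "paid_with": payment_method}
```

The docstring and inline comment are cheap to keep current because they evolve with the code; the external architecture description is not, which is exactly the consistency problem noted above.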
31. A lot of documents…
[Diagram: a map of the many kinds of documents a project may produce, organised by audience (user, developer, maintainer), by view (public, protected, private), by scope (domain, design, overview, components, implementation), and along two axes (abstract ↔ concrete, contents ↔ reference). The kinds include: framework overviews; snapshots and images; implementations; type and operation specifications; examples; cookbooks and recipes; design patterns; technical and application architecture; interface and interaction contracts; refinements; use cases and scenarios; detailed use cases; design notebooks; collaborations and roles; class interfaces.]
35. Heterogeneous documents
▪ Software artifacts can be categorized into source code, models, and documents (free text, structured text, ...).
▪ Useful external documents often combine contents from different kinds of software artifacts; these are here called heterogeneous software documents.
36. Key issues
▪ Issues with heterogeneous software documents include:
• preserving the semantic consistency between the different kinds of contents (such as informal documents and source code);
• the lack of appropriate documentation environments;
• contents integration;
• contents synchronisation.
▪ All these issues have a strong impact on the cost of producing, evolving, and maintaining the documentation.
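One well-known way to attack the consistency and synchronisation issues, at least for code examples embedded in documents, is to make the examples executable. The slides do not prescribe a tool; as an illustration, Python's standard doctest module runs the examples in a docstring and reports when documentation and code have drifted apart:

```python
import doctest

def word_count(text):
    """Count the words in a string.

    The example below is documentation that is also a test: if the
    implementation changes behaviour, doctest reports the mismatch.

    >>> word_count("to be or not to be")
    6
    >>> word_count("")
    0
    """
    return len(text.split())

# Run all embedded examples; a failure signals doc/code drift.
results = doctest.testmod()
print(results.failed)  # 0 when documentation and code agree
```

This only covers the document fragments that happen to be runnable; prose and diagrams still need the environments and practices discussed above.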
42. Weaki: key features
▪ Transclusion of code
▪ The inclusion of part or all of an electronic document into one or more other documents by reference.
▪ Structure emergence
▪ While editing pages, recurrent common structures emerge. These structures can at any time be captured as reusable types.
▪ Homogeneity
▪ Types are wiki pages. All known wiki metaphors are applicable.
▪ Scaffolding
▪ Whenever someone wants to create a new page of a particular type, the wiki automatically fills it with an initial skeleton, derived from that type's structure.
▪ Structured views
▪ Structured viewing filters out content not compliant with the page's type. This provides a consistent view of every page of the same type.
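Transclusion, the first feature above, can be sketched in a few lines. The `{{include:Page}}` marker syntax and the in-memory page store below are assumptions made for this illustration; Weaki's actual markup may differ:

```python
import re

# A toy wiki: page name -> page source (assumed marker syntax).
pages = {
    "GettingStarted": "Install the tool, then: {{include:BuildSteps}}",
    "BuildSteps": "Run `make` and then `make test`.",
}

def render(name, depth=0):
    """Replace each {{include:Page}} marker with that page's rendered text."""
    if depth > 10:  # guard against circular transclusion
        return "[transclusion too deep]"
    return re.sub(
        r"\{\{include:(\w+)\}\}",
        lambda m: render(m.group(1), depth + 1),
        pages[name],
    )

print(render("GettingStarted"))
# Install the tool, then: Run `make` and then `make test`.
```

Because the included text is resolved at render time, editing `BuildSteps` once updates every page that transcludes it, which is what keeps transcluded content consistent by construction.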
43. Weaki: key features
▪ Content assist
▪ The creation of new content is assisted with context-aware suggestions while editing a typed page.
▪ Global time labels
▪ Label a moment in time, then view the entire wiki state at that moment.
▪ Type awareness
▪ Types evolve. Pages evolve. Their level of compliance varies. Awareness of this metric allows balancing evolution against type adequacy.
▪ Team awareness
▪ The neighbourhood consists of wiki contributors (authors, editors, even readers) who inhabit the same pages as you. Being aware of who they are nurtures constructive conversations.
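The type-awareness feature above implies some measure of how well a page complies with its type. A simple version of such a metric (the section-based model here is an assumption for illustration, not Weaki's actual definition) is the fraction of the type's expected sections that the page actually contains:

```python
# A page type, modelled as the section headings it expects.
recipe_type = ["Ingredients", "Steps", "Notes"]

# A page, represented by the headings it currently has.
page_sections = ["Ingredients", "Steps"]

def compliance(type_sections, page_sections):
    """Fraction of the type's sections present in the page (0.0 to 1.0)."""
    if not type_sections:
        return 1.0
    present = sum(1 for s in type_sections if s in page_sections)
    return present / len(type_sections)

print(compliance(recipe_type, page_sections))  # ~0.67: the page lags its type
```

Surfacing this number to editors is what lets them balance letting pages and types evolve freely against keeping pages adequate to their types.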