This document outlines an organization's journey to enable independent teams by decoupling data flows. It began with all development done by one centralized team with tight dependencies. To scale, the organization split into multiple teams, but they remained tightly coupled through a shared database. The organization is now moving to an event-driven architecture in which decoupled services communicate asynchronously through event streams. This allows independent teams, improves scalability, and reduces operational costs by removing dependencies between services. Challenges include handling at-least-once event delivery and ensuring that consumers are idempotent.
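The two challenges named above go together: with at-least-once delivery, the same event can arrive more than once, so consumers must deduplicate before applying side effects. A minimal sketch of an idempotent consumer in Python (the event shape and the in-memory dedup store are illustrative assumptions; a production system would persist processed IDs in a database or cache):

```python
# Idempotent consumer sketch: at-least-once delivery means the same
# event may be processed twice, so we track processed event IDs and
# skip duplicates before applying side effects.

processed_ids = set()   # in production: a persistent store (DB table, Redis)
balances = {}           # example downstream state updated by events

def handle_event(event):
    """Apply an event exactly once, even if it is delivered multiple times."""
    if event["id"] in processed_ids:
        return False  # duplicate delivery: ignore
    acct = event["account"]
    balances[acct] = balances.get(acct, 0) + event["amount"]
    processed_ids.add(event["id"])
    return True

# Simulate at-least-once delivery: event "e1" arrives twice.
handle_event({"id": "e1", "account": "acct-1", "amount": 100})
handle_event({"id": "e1", "account": "acct-1", "amount": 100})  # duplicate
handle_event({"id": "e2", "account": "acct-1", "amount": 50})
```

Despite the duplicate delivery, the balance ends at 150 rather than 250, because the second "e1" is recognized and skipped.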
This report contains data and insights about web3 development happening across the Ethereum, Polygon, Arbitrum, Optimism, and Solana ecosystems. It features data from Alchemy, CoinMarketCap, DappRadar, Dune, Etherscan, GitHub, and the Internet Archive. This data provides insight into NFTs, DeFi, DAOs, social, and other web3 verticals and the growing interest and development in the web3 ecosystem. Developer data includes insights about libraries like Ethers.js and Web3.js, verified smart contract deployment, and decentralized applications. Featured projects include Alchemy, Chainlink, Ethereum Name Service, Farcaster, The Graph, Lens Protocol, RainbowKit, Uniswap, WalletConnect, and many more.
You will also learn how to:
• Build products and features faster with a complete suite of connectors and stream-management tools, and connect your environments to data pipelines
• Protect your most critical data and workloads with built-in security, governance, and resilience guarantees
• Deploy Kafka at scale in minutes while reducing the associated costs and operational burden
Slides from the [21 Creative Director Seminar].
Aimed at designers who work alongside PMs, and designers who must take on the PM role themselves, this session shares how the PM discipline thinks about and works toward the growth of the organization and the product.
It includes a case study from the B Mart Service team at Woowa Brothers, which builds B Mart and Baemin Store.
Workday Talent and Performance gives you detailed insight into your workforce to drive organizational growth.
• Tap into the power of your workforce: Use employee data—such as performance, skills, and career interests—to realize the full potential of your organization and your people.
• Lead change: Understand your workers’ skills and capabilities, and inform global talent planning to achieve strategic business objectives.
• Develop your workforce: Fill gaps with top internal, external, and contingent candidates. Easily assess individuals, recruit, and take action—all from your browser or mobile device.
• Engage your people: Provide continuous and periodic feedback as well as regular check-ins to drive engagement and enhance the strength of your workforce.
Customer-centricity is the new imperative, but most organizations are not prepared to transform the way they work to deliver a relevant, personalized customer experience at scale. Designed for those who have been exposed to Journey Mapping, this interactive workshop will share Accenture’s Customer Journey Management framework for guiding the omni-channel customer experience with agility and at scale. During the session you will assess your organization’s design, governance and operating model dimensions to identify capability gaps in delivering on your vision of customer-centricity.
In a working session you will prioritize the gaps in your organization’s capabilities to implement the Customer Journey Management framework. The workshop will help you visualize how to manage the dramatic increase in data, segments, content, collaboration, and compliance that come with high-fidelity journey mapping and omni-channel marketing. We will discuss your specific challenges, as well as real world examples of operating model innovations from companies across industries and levels of maturity. This session will help you prepare your company to identify and respond to customer experience opportunities with new levels of agility and scale.
Large companies see an opportunity to replace expensive legacy data warehouse applications with Big Data technologies. But how realistic is the notion of switching from tried and true data warehouse implementations to something that's still maturing, and what are the pitfalls? What will a business user need to learn in order to adapt to the new platform?
These past few years have accelerated changes and disrupted how companies lead, enable, empower, and engage around communications. Leaders in corporate communications, human resources, and IT are all tackling new responsibilities and challenges in connecting with employees who no longer regularly share the same workspace and may be overwhelmed with increased communication volume, velocity, and variety.
We have been catapulted into a new employee experience paradigm, and it can be challenging to get our bearings. We now need to plan for the future, but it can be hard to consider the future when the present is filled with unique and pressing communication challenges. Yet, the future of communication may hold answers to problems we are experiencing today; it may inspire us to change and, perhaps most importantly, enable us to understand how to prepare to embrace a better one.
We have gathered leading industry experts on employee communications, intranets, the digital workplace, and employee experience to help us navigate the years ahead. Join James Robertson, a Step Two global thought leader and author on digital employee experience, Suzie Robinson, author of the popular ClearBox employee experience platforms report, and Richard Harbridge, a celebrated Microsoft MVP, as they share insight on how to better understand, leverage, and prepare for the future of employee communications.
A presentation of my I.T. company SSDesign.
We provide I.T. consultants (long-term, 12 months or more), I.T. staff augmentation (1 to 6 months), mobile app development, enterprise architecture, software development, and application support services (local and remote).
Our rates are highly competitive and our services are beyond excellent.
Craig Geswindt
SSDesign CC
http://www.ssdesign.co.za
Skype: the.business1
Quotes and info from the SOSV Climate Tech Summit 2021, including Bill Gates, Vinod Khosla, Tony Fadell, Bill Gross, Amazon Climate Pledge Fund, Prelude Ventures, Form Energy, Pivot Bio, LanzaTech and many more.
Download: https://bit.ly/sosv-climate-insights
Imagine an application that has web and mobile (iOS and Android) clients, or whose API is consumed by similar frontends built by totally different teams. The functionalities they provide are distinct, hence the need for distinct sets of data and functions. You might think that the solution is an "as generic as possible" backend for all UIs. In my experience, this kind of backend leads to big problems: poor performance, an entangled user experience, and extra, unnecessary communication between development teams as they try to align on their needs. Fortunately, a promising set of approaches is taking the stage, created to optimize how front-end applications collaborate with back-ends: the BFF (Backend-For-Frontend) pattern and GraphQL. Given these two approaches, which one is the right choice? Join me in a talk where we will discuss both, underline their good and bad sides, and determine which you should consider as the backend technology for your frontend application.
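To make the BFF idea concrete, here is a minimal sketch (the domain model and field choices are hypothetical): one shared domain service, and one thin backend per frontend that shapes the same data for its specific UI instead of one generic backend serving everyone.

```python
# Backend-For-Frontend sketch: one shared domain service, two BFFs
# that each tailor the response to their frontend's needs.

def get_product(product_id):
    """Shared domain service (stand-in for a real downstream API call)."""
    return {
        "id": product_id,
        "name": "Widget",
        "description": "A long marketing description ...",
        "price_cents": 1999,
        "stock": 42,
        "related_ids": [2, 3, 4],
    }

def web_bff_product(product_id):
    """Web wants rich detail: description, stock, related products."""
    p = get_product(product_id)
    return {
        "name": p["name"],
        "description": p["description"],
        "price": f"${p['price_cents'] / 100:.2f}",
        "in_stock": p["stock"] > 0,
        "related": p["related_ids"],
    }

def mobile_bff_product(product_id):
    """Mobile wants a lean payload: just name and a display price."""
    p = get_product(product_id)
    return {"name": p["name"], "price": f"${p['price_cents'] / 100:.2f}"}
```

Each BFF is owned by the team that owns its frontend, so the web team and the mobile team can evolve their payloads independently; GraphQL attacks the same problem by letting each client select its own fields from a single schema instead.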
Why Businesses Must Adopt NetSuite ERP Data Migration (Jade Global)
Businesses should look at NetSuite for seamless ERP data migration. Follow these data migration best practices and strategies for ERP implementation. Read now!
This presentation walks through how easy it is to integrate MS Dynamics NAV with Salesforce.com by using our on-demand data integration platform RapidiOnline. www.rapidionline.com
Simply Business is a leading insurance provider for small business in the UK and we are now growing to the USA. In this presentation, I explain how our data platform is evolving to keep delivering value and adapting to a company that changes really fast.
Running Business Analytics for a Serverless Insurance Company - Joe Emison & ... (Daniel Zivkovic)
Take a peek into the future of IT - beyond Serverless Software Development, when Serverless becomes a way to run Internal IT.
When ServerlessToronto.org invited Joe Emison - AWS Serverless Hero, we expected to see how he "knocked down the wall" between AWS & Google Clouds (to query Amazon DynamoDB from Google BigQuery) using the Fivetran ELT tool, but we learned so much more... and you will too: https://youtu.be/GK5Ivm6EOlI
This exam measures your ability to accomplish the technical tasks listed below. The percentages indicate the relative weight of each major topic area on the exam. https://www.pass4sureexam.com/70-461.html
How Schneider Electric Transformed Front-office Operations With Real-time Dat... (Informatica Cloud)
Many of the world’s corporations use Salesforce.com to drive their front office, and while most experience success others encounter roadblocks and difficulties as their Salesforce footprint grows. Countless customers suffer from a lack of up-to-date information which impedes business progress and stifles end-user productivity.
This presentation describes how Schneider Electric SE, a multinational corporation that specializes in electricity distribution, automation management and components product for energy management, used Informatica Cloud to improve the operational efficiency of their Salesforce.com front-office.
It also details how Schneider Electric was able to make key data readily available to Sales teams in real-time, on the right device, to ensure the success of a highly visible front-office integration initiative.
To watch this presentation visit : http://youtu.be/kU2A1xMvaI8
For a 30 day free trial of Informatica Cloud visit:
http://www.informaticacloud.com/trial
Improving Agility While Widening Profit Margins Using Data Virtualization (Denodo)
The deluge of information companies face today is not manageable using traditional data integration approaches which prevent fast and rich data flow throughout the organization. This is demonstrated through IT’s struggle to obtain up-to-date information for the business, as views and reports of company operations become outdated before they get delivered.
Data virtualization can complement and boost data warehousing and ETL technologies by building a sort of "Logical Data Warehouse" abstraction layer, which facilitates broader and faster data integration across the enterprise. In this presentation you can learn how to spend less time manually reconciling data between silos and help your company improve performance and business agility from order to cash. Mike Ferguson will provide you the latest insights about this technology and Mark Pritchard shows some data virtualization use cases.
The next-generation user experience should move to customer engagement zones along customers' preferred channels, with action-to-outcome approaches. With a wealth of information at its disposal, ranging from inventory to inquiry, weather to warehouse alerts, and product to promotion info, enterprise digitization can create value at every customer touch point. Attendees witnessed the manifestation of TCS' Thought Leadership in the Game of Retail.
Is your big data journey stalling? Take the Leap with Capgemini and Cloudera (Cloudera, Inc.)
Transitioning to a Big Data architecture is a big step; and the complexity of moving existing analytical services onto modern platforms like Cloudera, can seem overwhelming.
Innovate with the data you have with UiPath and Snowflake.pdf (Cristina Vidu)
Collect data from virtually anywhere. Process structured and unstructured data from sources like legacy systems and local system files, then store it in Snowflake. In this session we will look at how to process PDF documents and load information into Snowflake using UiPath Document Understanding, and how to get data from Snowflake to use in UiPath automations.
This session is especially for data analysts and other Snowflake users interested in learning more about how UiPath can be applied to common data challenges. We’ll discuss several use cases including data gathering, preparation, and using UiPath to extend integration between Snowflake and 3rd party applications.
📕 During the meetup we will cover:
Typical use cases for RPA and Snowflake
How the native integration for UiPath and Snowflake works
How to build an automation using the UiPath / Snowflake integration, including a live example and demo
👨💻 Speakers:
Mo Roy, Senior Partner Engineer, Technology Alliances @UiPath
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente... (confluent)
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Unlocking the Power of IoT: A comprehensive approach to real-time insights (confluent)
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Hybrid workshop: Stream Processing with Flink (confluent)
Stream processing is a prerequisite of the data streaming stack, powering real-time applications and pipelines.
It enables greater data portability, optimized resource utilization, and a better customer experience by processing data streams in real time.
In our hands-on hybrid workshop, you will learn how to easily filter, join, and enrich real-time data within Confluent Cloud using our serverless Flink service.
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark... (confluent)
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and Scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT-Platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
Event-driven architecture (EDA) will be the heart of MAPFRE's ecosystem. To stay competitive, today's companies increasingly depend on real-time data analysis, which gives them faster insights and response times. Doing business on real-time data means being situationally aware: detecting and responding to what is happening in the world right now.
Events and Microservices - Santander TechTalk (confluent)
In this session we will examine how the worlds of events and microservices complement and improve each other, exploring how event-based patterns let us decompose monoliths in a scalable, resilient, and decoupled way.
The purpose of the session is to dive into Apache Kafka, data streaming, and Kafka in the cloud:
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
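As background for the agenda above, Kafka's core abstraction is a partitioned, append-only log from which each consumer group tracks its own read offset. The toy in-memory model below (not the real client API; see confluent-kafka or kafka-python for that) makes the idea concrete:

```python
# Toy model of Kafka's core abstraction: an append-only log per topic,
# with each consumer group tracking its own read offset independently.

class Topic:
    def __init__(self):
        self.log = []       # append-only record log
        self.offsets = {}   # consumer group name -> next offset to read

    def produce(self, record):
        self.log.append(record)

    def consume(self, group, max_records=10):
        """Return up to max_records unread records for this consumer group."""
        start = self.offsets.get(group, 0)
        records = self.log[start:start + max_records]
        self.offsets[group] = start + len(records)
        return records

orders = Topic()
orders.produce({"order_id": 1, "total": 30})
orders.produce({"order_id": 2, "total": 45})

# Two independent consumer groups each see the full stream,
# because each keeps its own offset into the same log.
billing = orders.consume("billing")
analytics = orders.consume("analytics")
```

The key property this illustrates is that consuming does not remove records, unlike a traditional message queue: the log is retained, and any number of groups can read it at their own pace.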
Build real-time streaming data pipelines to AWS with Confluent (confluent)
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Q&A with Confluent Professional Services: Confluent Service Mesh (confluent)
Whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment, or in a different situation where data protection and encryption of sensitive information are required, Confluent Service Mesh allows you to transparently encrypt your data without making code changes to your existing applications.
Citi Tech Talk: Event Driven Kafka Microservices (confluent)
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka.
Confluent & GSI Webinars series - Session 3 (confluent)
An in-depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real-time data capabilities.
It will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers and Pre-Sales, as well as more technically minded, business-aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
Transforming applications built with traditional messaging solutions such as TIBCO, MQ and Solace to be scalable, reliable and ready for the move to cloud
How can applications built with traditional messaging technologies like TIBCO, Solace and IBM MQ be modernised and made cloud-ready? What are the advantages of event streaming approaches to pub/sub vs traditional message queues? What are the strengths and weaknesses of both approaches, and which use cases and requirements are actually a better fit for messaging than Kafka?
This session will show why the old paradigm does not work and that a new approach to the data strategy needs to be taken. It aims to show how a Data Streaming Platform is integral to the evolution of a company's data strategy, and how Confluent is not just an integration layer but the central nervous system for an organisation.
Confluent Partner Tech Talk with Synthesis (confluent)
A discussion of the arduous planning process and a deep dive into the design and architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
May Marketo Masterclass, London MUG May 22 2024.pdf (Adele Miller)
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... (Globus)
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Understanding Globus Data Transfers with NetSage (Globus)
NetSage is an open privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks world wide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for Flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptx (rickgrimesss22)
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Enhancing Research Orchestration Capabilities at ORNL.pdf (Globus)
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... (Juraj Vysvader)
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but the extensions did reach 63K downloads (powering possibly tens of thousands of websites).
An Enterprise Resource Planning system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
top nidhi software solution freedownloadvrstrong314
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
2. About us
We’re on a mission to build MENA’s most effective Product-Led organization
● Largest delivery + quick commerce player in MENA
● Active in 9 markets
● Headquartered in Dubai
● 2 tech hubs
4. Let’s take a step back and start at the beginning
● Founded in 2004 in Kuwait
● Fast-paced startup, development all done by one team
● Everybody knows all the moving pieces; the focus is on fast delivery
5. Let’s take a step back and start at the beginning
[Diagram: three business teams each send requests to a single delivery team]
6. The organization grew
[Diagram: multiple business teams send requests to multiple delivery teams]
8. Delivery Hero enters the picture
In 2016, talabat became part of the larger Delivery Hero group
In 2019, we started our journey into a tech and product organization
9. Our journey timeline
● 2019. Goal: move from the “delivery team” model and build a tech & product organisation. Milestones: team size is at 80; we process 140 orders per minute.
● 2020. Goal: transform towards a self-organised, empowered-teams model. Milestones: team grows to 210; we process 300 orders per minute.
● 2021. Goal: build the right thing, in the right way, with speed enabled by engineering culture and tech excellence. Milestones: team grows to 250 (mid-year); we process 500 orders per minute.
10. Architecture evolves as you grow
[Diagram: iOS, Android, and Web clients, the Consumer API, Vendor Management, CMS Management, and Services A and B, all centered on the Main Database]
11. Architecture evolves: integrating with global services
[Diagram: the Consumer API and Order Transmission share the Main Database; Order Transmission connects to global services: Logistics, Restaurant tools, and the Driver app]
● Order Transmission dispatches orders to global services
● It writes updates from global services back to the main database so that downstream services have the data
12. Our journey timeline (recap of slide 9)
13. Functional scope is growing
[Diagram: the previous architecture plus new services Rewards, Order information, and Reorders, all still attached to the Main Database alongside the Consumer API, Order Transmission, Logistics, Restaurant tools, and Driver app]
14. Functional scope is growing
● Orders are placed in the main database
● Record updates in the database distribute status updates
○ Read on demand, or DB polling on a read replica
● Tight coupling on the DB schema between teams
● Scheduled deployment windows due to DDL changes
● Very limited scalability
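The polling mechanism above can be sketched roughly like this; the schema and names are illustrative, not from the talk. Every consumer re-queries the shared table, which is why load piles up on the central database.

```python
import sqlite3

# Minimal sketch (hypothetical schema) of the DB-polling pattern: a downstream
# service repeatedly queries the shared orders table for rows it has not seen
# yet, tracking a high-water mark id.

def poll_order_updates(conn, last_seen_id):
    """Return order rows with id greater than last_seen_id."""
    return conn.execute(
        "SELECT id, status FROM orders WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO orders (id, status) VALUES (?, ?)",
    [(1, "placed"), (2, "dispatched")],
)
updates = poll_order_updates(conn, 0)  # both rows are new to this consumer
```

Each additional downstream service repeats this query loop against the same database, so the pattern scales with consumers only by adding read replicas.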
16. Our journey timeline (recap of slide 9)
17. A closer look at our dependencies
● Most services need order and vendor data to function
● Teams need to alter database schemas to achieve their objectives
● Most code depends on the existence of the monolithic database
● A bit more philosophically, the database was our orchestrator
Adding more people to the team didn’t make us faster; it made the problem more pressing
18. What we are aiming for
● Allow our teams to build services with fewer dependencies and act independently
● Enable operational speed by decoupling functionalities
● Allow our organization to scale in order volume and team size
● Reduce operational cost
19. Rethinking scalability
● Database coupling hinders speed
● HTTP request fanout doesn’t scale
● Queues work, but you need one per consuming service
● The data is still required downstream
● Coupling between teams and services needs to be resolved
20. Some principles for services
● Low coupling, high cohesion
● No service can depend on another service for data availability, functionality, or uptime
● Events are first-class citizens
● Choreography over orchestration
The full list is actually 10, but we don’t need all of them right now
21. The journey to decoupling information
[Diagram: a new Nexus order service runs in shadow mode next to the legacy path. The Consumer API still writes orders to the Main Database, which feeds Order Transmission, central service integration, Rewards, Order information, Reorders, and the User service; the Nexus order service writes the same orders to its own Orders Database, and a Compare Orders step checks both stores to ensure data consistency]
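The shadow-mode comparison step can be sketched roughly like this. The legacy path stays authoritative while the new service writes to its own store, and a comparator diffs both to build confidence before cutover; all names and record shapes here are assumptions for illustration.

```python
# Diff the legacy order store against the shadow store written by the new
# service; any mismatch or missing record flags a consistency problem.

def compare_orders(legacy, shadow):
    """Return the ids of orders that differ or are missing in the shadow store."""
    return [
        order_id
        for order_id, record in legacy.items()
        if shadow.get(order_id) != record
    ]

legacy_db = {1: {"status": "delivered"}, 2: {"status": "placed"}}
shadow_db = {1: {"status": "delivered"}, 2: {"status": "accepted"}}
mismatched = compare_orders(legacy_db, shadow_db)  # order 2 disagrees
```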
22. The journey to decoupling information
[Diagram: the Nexus order service is now in the write path. It persists orders to its own Orders Database and writes orders back to the Main Database for compatibility, so Order Transmission, central service integration, Rewards, Order information, Reorders, and the User service keep working unchanged]
23. The journey to decoupling information
[Diagram: the Nexus order service publishes order information to an Order Stream while still writing orders to the Main Database for compatibility; downstream services consume order information from the stream]
● Create fat events that satisfy the needs of downstream services
● Use an event schema registry to ensure all events conform to their schema
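A "fat" event of the kind described above might look like the sketch below: it embeds the user and vendor snapshots a consumer would otherwise have to fetch from other services. The field names and the membership check are assumptions standing in for a real schema registry.

```python
# Illustrative fat order event plus a poor man's schema check. A real setup
# would validate against a versioned schema in a registry instead.
REQUIRED_FIELDS = {"event_id", "order_id", "status", "user", "vendor"}

def conforms_to_schema(event):
    """Minimal stand-in for a schema-registry validation call."""
    return REQUIRED_FIELDS.issubset(event)

fat_event = {
    "event_id": "evt-001",
    "order_id": 42,
    "status": "placed",
    "user": {"id": 7, "name": "A. Customer"},          # embedded user snapshot
    "vendor": {"id": 3, "opening_hours": "10:00-23:00"},  # embedded vendor snapshot
}
```

Because the event carries everything downstream services need, a consumer like Order Transmission never has to call the user service at read time.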
24. The journey to decoupling information
[Diagram: the final picture. The Nexus order service owns its Orders Database, publishes order information to the Order Stream, and writes orders to the Main Database only for compatibility; Order Transmission, central service integration, Rewards, Reorders, and the User service consume order information from the stream]
25. Why this matters
● Allow our teams to build services with fewer dependencies and act independently
○ Events carry full data, reducing dependencies
○ If other teams need data, it is provided on a stream and can be cached locally
● Enable operational speed by decoupling functionalities
○ All services operate fully independently; no central DDL couples them
○ Event schemas ensure structure while enabling decoupling
○ If a service is unavailable, events can be consumed later
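Local caching from the stream can be sketched as below. The event shape is assumed: each fat event carries a full vendor snapshot, so a simple last-write-wins cache is enough for this illustration.

```python
# A downstream service keeps a local, stream-fed cache instead of querying the
# central database at read time.

class VendorCache:
    def __init__(self):
        self._vendors = {}

    def apply(self, event):
        """Apply one stream event; the full snapshot overwrites any prior state."""
        vendor = event["vendor"]
        self._vendors[vendor["id"]] = vendor

    def get(self, vendor_id):
        return self._vendors.get(vendor_id)

cache = VendorCache()
cache.apply({"vendor": {"id": 3, "opening_hours": "10:00-23:00"}})
cache.apply({"vendor": {"id": 3, "opening_hours": "09:00-22:00"}})  # newer wins
```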
26. Why this matters II
● Allow our organization to scale in order volume and team size
○ No single database holding all data is required
○ Downstream services can be added independently without adding fanout HTTP requests at the origin
● Reduce operational cost
○ Local databases can be smaller
○ Data retention can be optimized per service
27. New challenges
● Idempotency handling is required
● At-least-once semantics are more complex
● Event schemas need more upfront design thinking to get the most out of the solution
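Idempotent consumption under at-least-once delivery can be sketched as follows. The event fields are illustrative; a real consumer would persist the seen-id set transactionally together with its state.

```python
# The consumer remembers processed event ids and skips redeliveries, so
# handling the same event twice leaves the state unchanged.
processed_ids = set()
order_status = {}

def handle_order_event(event):
    if event["event_id"] in processed_ids:
        return False  # duplicate delivery, safe to ignore
    processed_ids.add(event["event_id"])
    order_status[event["order_id"]] = event["status"]
    return True

first = handle_order_event({"event_id": "e1", "order_id": 1, "status": "placed"})
second = handle_order_event({"event_id": "e1", "order_id": 1, "status": "placed"})
```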
28. Future
● Each vertical can have its own order service, creating multiple producers of events on the same stream
● The source of truth will be the stream
● Disaster recovery setup will be simplified by reducing data dependencies
Tell a bit about the background
8 years in the region: Talabat, Careem, Dubizzle
Today our team is a multinational team that grew from humble beginnings with a few engineers in Kuwait to around 75 in 2019
We experienced massive growth between 2019 and 2022
Like many companies, it worked like this
And as the organization grew, it still worked like this
High-level architecture
All centered around a monolithic database
Restaurant data and content go into the DB and are read by the Consumer API
Team size grows and it becomes harder to work in the single codebase
Services are a big thing all around the world
Services start to emerge, but data is still coupled to the main database
Easier to work on smaller codebases
Load is still on one database, mitigated by creating read replicas
Tight coupling at the database level
But let’s have a look from a different view
Database-based synchronization and update mechanism
Order Transmission polls the main DB and updates existing order records
It works and is simple
But:
Single point of failure
Doesn’t scale well
Everybody needs in-depth knowledge of the whole system
Tight coupling
Functional scope grows
Order information is still centralized
Updates are distributed via database updates
A complex connected system has connections between areas; they define how well a team can work independently
Each column is a team and each card is an ask from a different team
Commitments are hard to keep when extra work is needed to support others
You start to wonder why we get more and more dependencies, while progress starts to feel slower
The point on the right center is orders
The big spot in the upper center is vendor information
Those are actually from the Dubizzle times
Full list:
Low coupling, high cohesion
No service can depend on another for uptime, functionality, or data availability
Transactions can span at most one service
No vertical can impact the stability of another
Bounded contexts are defined by business realms and not CRUD APIs
Choreography over orchestration
Events are first-class citizens
Create a new service that can process orders, but keep the old one
Process orders as before; updates to services are still on the same order object
The new service no longer has access to all the information an order object might need, so the model needs to contain all of this data for each order
Examples are user information, vendor address or opening hours, and driver details
Avoid requests from Transmission to the user service to fetch user data