Shaping serverless architecture with domain-driven design patterns - Asher Sterkin
This document discusses using Domain-Driven Design (DDD) patterns to structure serverless applications. It introduces DDD concepts like bounded contexts, aggregates, repositories, and CQRS. Bounded contexts separate domains into cohesive models that are loosely coupled. Aggregates define transactional boundaries and ensure data integrity. Repositories provide storage and retrieval of aggregates. CQRS separates commands and queries using different data models. Applying these DDD patterns can help organize serverless applications as they grow in complexity.
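As a rough illustration of how aggregates and repositories fit together, here is a minimal Python sketch. The `Order`/`OrderRepository` names and the in-memory store are illustrative assumptions, not taken from the talk; in a serverless deployment the repository would typically wrap a managed store such as DynamoDB.

```python
from dataclasses import dataclass, field


@dataclass
class Order:
    """Aggregate root: all changes to line items go through Order,
    which enforces the invariant that quantities and prices are valid."""
    order_id: str
    lines: list = field(default_factory=list)

    def add_line(self, sku: str, qty: int, price: int) -> None:
        if qty <= 0 or price < 0:
            raise ValueError("quantity must be positive, price non-negative")
        self.lines.append((sku, qty, price))

    @property
    def total(self) -> int:
        return sum(qty * price for _, qty, price in self.lines)


class OrderRepository:
    """Repository: hides storage behind collection-like save/get operations."""
    def __init__(self):
        self._store = {}

    def save(self, order: Order) -> None:
        self._store[order.order_id] = order

    def get(self, order_id: str) -> Order:
        return self._store[order_id]
```

The aggregate defines the transactional boundary: callers never mutate `lines` directly, so the invariant cannot be bypassed, and the repository can be swapped for a cloud-backed one without touching domain code.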
This document discusses applying domain-driven design patterns to serverless architecture. It begins by introducing the speaker and their background, then provides an overview of serverless architecture and some of its benefits. It goes on to discuss the challenges that arise as serverless applications grow in complexity and suggests that organizing principles such as domain-driven design patterns are needed. It then covers domain-driven design concepts like bounded contexts, aggregates, repositories, and CQRS, with examples of how they could be applied to serverless architecture. It closes with interim conclusions: serverless is a new paradigm that requires principles to tame complexity, and domain-driven design offers useful patterns for this purpose.
Developing cloud serverless components in Python: DDD Perspective - Asher Sterkin
This document discusses developing serverless components in Python from a domain-driven design perspective. It begins with introductions and background on serverless architecture and domain-driven design. It then demonstrates implementing a sample cargo tracking system using these approaches in Python and AWS Lambda. Key points made include mapping the domain model and use cases to serverless functions and services, using AWS Step Functions to coordinate the functions, and the Matte toolkit to generate infrastructure. The document considers alternatives and argues this architecture could scale efficiently while keeping functions focused. It closes by speculating on future serverless-native architectures beyond what exists today.
This document discusses configuration management at Deutsche Bahn, a German railway company. It describes their move to a CMM Level 4 managed environment for their data centers. Key points:
- They developed a solution using configuration descriptors to fully describe their platform, applications, and scenarios.
- A configuration data hub stores all configuration information using Structr and Neo4j to manage complex dependencies and allow real-time searches/updates.
- This provides Deutsche Bahn a single, self-contained system to manage the full lifecycle of their SOA applications across distributed infrastructure.
The domain model is an abstraction of the problem domain that your system supports. It contains the objects and operations that are crucial to your system and its users, so the design of the domain model deserves the utmost care and attention. In this session you will be introduced to Domain-Driven Design (DDD) and learn how to put it into practice. We will explore how to apply DDD at the tactical level to design a rich domain model that encapsulates behaviour, protects its invariants, and can be tested in isolation from its runtime environment.
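A rich domain model of this kind is easiest to see with a value object. The sketch below is a generic, hypothetical example (the `Money` type is not from the session) showing invariants enforced in plain Python, testable with no runtime environment at all.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Money:
    """Value object: immutable, compared by value, and guarding its invariants."""
    amount: int   # minor units (e.g. cents) to avoid float rounding issues
    currency: str

    def __post_init__(self):
        if self.amount < 0:
            raise ValueError("amount must be non-negative")

    def add(self, other: "Money") -> "Money":
        # Protect the invariant: amounts in different currencies never mix.
        if other.currency != self.currency:
            raise ValueError("cannot add amounts in different currencies")
        return Money(self.amount + other.amount, self.currency)
```

Because the object carries its own rules, a unit test can exercise every invariant without a database, web framework, or cloud runtime in sight.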
The Lyft data platform: Now and in the future - markgrover
- Lyft has grown significantly in recent years, providing over 1 billion rides to 30.7 million riders through 1.9 million drivers in 2018 across North America.
- Data is core to Lyft's business decisions, from pricing and driver matching to analyzing performance and informing investments.
- Lyft's data platform supports data scientists, analysts, engineers and others through tools like Apache Superset, change data capture from operational stores, and streaming frameworks.
- Key focuses for the platform include business metric observability, streaming applications, and machine learning while addressing challenges of reliability, integration and scale.
Rail Ticketing Assistance from the Graph Way, KCOM - Neo4j
This document summarizes the challenges faced by a transportation ticketing system and how a graph database solution using Neo4j was implemented to address these challenges. The transportation system had to handle flexible bookings, integration of different data sources, and complex reporting and demand management needs. Testing showed Neo4j could meet the performance requirements, handling over 10,000 transactions per second with response times under 3 milliseconds. Future plans include further load testing and upgrading to new Neo4j features to improve availability.
The journey toward a self-service data platform at Netflix - SF 2019 - Karthik Murugesan
The Netflix data platform is a massive-scale, cloud-only suite of tools and technologies. It includes big data tech (Spark and Flink), enabling services (federated metadata management), and machine learning support. But with power comes complexity. Kurt Brown explains how Netflix is working toward an easier, "self-service" data platform without sacrificing any enabling capabilities.
Domain-driven design and model-driven development - Dmitry Geyzersky
This document discusses domain driven design and model driven development. It introduces ontology and how it relates to domain driven design. The document outlines key domain driven design building blocks like the domain model, entities, value objects, repositories, services, and aggregates. It also discusses challenges of domain driven design and code generation techniques.
Bighead: Airbnb's end-to-end machine learning platform
Airbnb has a wide variety of ML problems ranging from models on traditional structured data to models built on unstructured data such as user reviews, messages and listing images. The ability to build, iterate on, and maintain healthy machine learning models is critical to Airbnb’s success. Bighead aims to tie together various open source and in-house projects to remove incidental complexity from ML workflows. Bighead is built on Python, Spark, and Kubernetes. The components include a lifecycle management service, an offline training and inference engine, an online inference service, a prototyping environment, and a Docker image customization tool. Each component can be used individually. In addition, Bighead includes a unified model building API that smoothly integrates popular libraries including TensorFlow, XGBoost, and PyTorch. Each model is reproducible and iterable through standardization of data collection and transformation, model training environments, and production deployment. This talk covers the architecture, the problems that each individual component and the overall system aims to solve, and a vision for the future of machine learning infrastructure. It is widely adopted at Airbnb and we have a variety of models running in production. We plan to open source Bighead to allow the wider community to benefit from our work.
Speaker: Andrew Hoh
Andrew Hoh is the Product Manager for the ML Infrastructure and Applied ML teams at Airbnb. Previously, he has spent time building and growing Microsoft Azure's NoSQL distributed database. He holds a degree in computer science from Dartmouth College.
Building Intelligent Applications, Experimental ML with Uber’s Data Science W... - Databricks
In this talk, we will explore how Uber enables rapid experimentation of machine learning models and optimization algorithms through Uber’s Data Science Workbench (DSW). DSW covers a series of stages in data scientists’ workflow including data exploration, feature engineering, machine learning model training, testing and production deployment. DSW provides interactive notebooks for multiple languages with on-demand resource allocation, and lets users share their work through community features.
It also supports notebooks and intelligent applications backed by Spark job servers. Deep learning applications based on TensorFlow and Torch can be brought into DSW smoothly, with resource management taken care of by the system. The environment in DSW is customizable: users can bring their own libraries and frameworks. Moreover, DSW supports Shiny and Python dashboards as well as many other in-house visualization and mapping tools.
In the second part of this talk, we will explore the use cases where custom machine learning models developed in DSW are productionized within the platform. Uber applies machine learning extensively to solve some hard problems. Some use cases include calculating the right prices for rides in over 600 cities and applying NLP technologies to customer feedback to offer safe rides and reduce support costs. We will look at the various options evaluated for productionizing custom models (server-based and serverless). We will also look at how DSW integrates into the larger Uber ML ecosystem, e.g. model/feature stores and other ML tools, to realize the vision of a complete ML platform for Uber.
This document discusses recommendations and machine learning at Netflix. It provides an overview of:
- How Netflix provides personalized recommendations on member homepages to help them find content to watch.
- Netflix's experimentation cycle of designing experiments, collecting data, generating features, training models, and doing A/B testing.
- How Netflix handles "facts" or input data for recommendations, including how facts change over time and how they are logged and stored at scale.
- The challenges of logging and accessing facts at Netflix's scale, and how they are addressing issues like deduplication, performance, and supporting different access patterns.
Bighead: Airbnb’s End-to-End Machine Learning Platform with Krishna Puttaswa... - Databricks
Bighead is Airbnb's machine learning infrastructure that was created to:
- Standardize and simplify the ML development workflow;
- Reduce the time and effort to build ML models from weeks/months to days/weeks; and
- Enable more teams at Airbnb to utilize ML.
It provides shared services and tools for data management, model training/inference, and model management to make the ML process more efficient and production-ready. This includes services like Zipline for feature storage, Redspot for notebook environments, Deep Thought for online inference, and the Bighead UI for model monitoring.
TensorFlow 16: Building a Data Science Platform - Seldon
1. The document discusses building a data science platform on DC/OS to operationalize machine learning models. It outlines challenges at each stage of the ML pipeline and how DC/OS addresses them with distributed computing capabilities and services for data storage, processing, model training and deployment.
2. Key stages covered include data preparation, distributed training using frameworks like TensorFlow, model management with storage of trained models, and low-latency model serving for production with TensorFlow Serving.
3. DC/OS provides a full-stack platform to operationalize ML at scale through distributed computing resources, container orchestration, and integration of open source data and ML services.
A Context Map will visualize your system: cluttered models, too much or not enough communication, and dependencies on other systems are just some of the insights you'll gain if you start using them.
Machine Learning Powered by Graphs - Alessandro Negro - GraphAware
Graph-based machine learning is becoming an important trend in artificial intelligence, cutting across many other techniques. The world's largest companies are promoting this trend. For instance, Google's Expander platform combines semi-supervised machine learning with large-scale graph-based learning by building a multi-graph representation of the data, with nodes corresponding to objects or concepts and edges connecting concepts that share similarities.
Using graphs as the basic representation of data for machine learning has several advantages: (i) the data is already modelled for further analysis, explicitly representing connections and relationships between things and concepts; (ii) graphs can easily combine multiple sources into a single representation and learn over them, creating Knowledge Graphs; (iii) many machine learning algorithms exploit graphs to improve computational performance and result quality.
The presentation illustrates these advantages with applications such as recommendation engines and natural language processing that use machine learning over a graph. Concrete scenarios, models, and end-to-end infrastructure will be discussed.
The document discusses Domain Driven Design (DDD), a software development approach that focuses on building an object-oriented model of the domain that software needs to represent. It emphasizes modeling the domain closely after the structure and language of the problem domain. Key aspects of DDD discussed include ubiquitous language, bounded contexts, entities, value objects, aggregate roots, repositories, specifications, domain services, modules, domain events, and command query separation. DDD is best suited for projects with a significant domain complexity where closely modeling the problem domain can help manage that complexity.
SQL or NoSQL? Designing 'Big Data-ready' applications through the use of... - Codemotion
"SQL or NoSQL? Designing 'Big Data-ready' applications through the use of Polyglot Persistence" by Mario Cartia
The CAP theorem, formulated by Eric Brewer in 1998, is a conjecture stating that a distributed system cannot simultaneously guarantee consistency, availability, and partition tolerance. Hence the need for a simple approach to designing applications that use a multiplicity of data storage technologies (RDBMS, NoSQL, DFS, etc.), hiding the underlying complexity by providing a single external interface for accessing them.
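One common way to present that single external interface over heterogeneous stores is a routing repository. The sketch below is a minimal, hypothetical Python version: the class names are illustrative, and in-memory stubs stand in for the real RDBMS/NoSQL/DFS clients that would back each route in practice.

```python
class KeyValueStore:
    """Minimal protocol every backing store must implement."""
    def put(self, key, value): ...
    def get(self, key): ...


class InMemoryStore(KeyValueStore):
    """Stub standing in for a real store (e.g. an RDBMS or NoSQL client)."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class PolyglotRepository:
    """Routes each record kind to the store best suited for it,
    exposing one interface to the application."""
    def __init__(self, routes: dict):
        self._routes = routes  # record kind -> backing store

    def save(self, kind, key, value):
        self._routes[kind].put(key, value)

    def load(self, kind, key):
        return self._routes[kind].get(key)
```

The application code only ever sees `save`/`load`; which technology holds each record kind becomes a configuration decision rather than something scattered through the codebase.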
Graph Database Prototyping made easy with Graphgen - GraphAware
Graphgen aims to help people prototype a graph database by providing a visual tool that eases the generation of nodes and relationships with a Cypher DSL.
Many people struggle not only with creating a good graph model of their domain but also with creating sensible example data to test hypotheses or use cases.
Graphgen helps people with little time but a good enough understanding of their domain model by providing a visual DSL for data-model generation that borrows heavily from the Neo4j Cypher graph query language.
The ASCII-art syntax allows even non-technical users to write and read model descriptions/configurations that are as concise as plain English yet formal enough to be parseable. The underlying generator takes the DSL inputs (structure, cardinalities, and amount ranges) and combines them with a comprehensive fake-data generation library to create real-world-like datasets of medium/arbitrary size and complexity.
Users can create their own models by combining the basic building blocks of the DSL and share their data descriptions with others via a simple link.
Domain-Driven Design Big Picture: Strategic Patterns - Mark Windholtz
The document discusses Domain-Driven Design (DDD), an approach to software development for complex problems. It provides an overview of DDD and strategic patterns for organizing large projects with multiple teams, such as defining bounded contexts and context maps. Context maps describe the relationships between models, including shared kernels, customer/supplier, and conformist relationships. The document emphasizes defining a ubiquitous language within each context and mapping contexts to understand integration strategies at a large scale.
Vertex AI: Pipelines for your MLOps workflows - Márton Kodok
The document discusses Vertex AI pipelines for MLOps workflows. It begins with an introduction of the speaker and their background. It then discusses what MLOps is, defining three levels of automation maturity. Vertex AI is introduced as Google Cloud's managed ML platform. Pipelines are described as orchestrating the entire ML workflow through components. Custom components and conditionals allow flexibility. Pipelines improve reproducibility and sharing. Changes can trigger pipelines through services like Cloud Build, Eventarc, and Cloud Scheduler to continuously adapt models to new data.
Attended Amazon Web Services Dev Day 2018 and shared the experience with office colleagues through a presentation. The presentation covers Chaos Engineering, Serverless Architecture, IoT, and Serverless Data Lake.
Machine Learning Interpretability - Mateusz Dymczyk - H2O AI World London 2018 - Sri Ambati
This talk was recorded in London on Oct 30, 2018 and can be viewed here: https://youtu.be/p4iAnxwC_Eg
The good news is building fair, accountable, and transparent machine learning systems is possible. The bad news is it’s harder than many blogs and software package docs would have you believe. The truth is nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!
This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining viable techniques for debugging, explaining, and testing machine learning models.
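Most such techniques are model-agnostic approximations, and permutation importance is among the simplest. The sketch below is a generic illustration, not code from the talk; the function name, toy model, and data are all hypothetical.

```python
import random


def permutation_importance(model, X, y, col, n_repeats=5, seed=0):
    """Model-agnostic explanation: shuffle one feature column and measure
    how much accuracy drops. A large drop means the model relies on that
    feature. Like most XAI techniques, this yields an approximate answer."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        vals = [r[col] for r in X]
        rng.shuffle(vals)  # break the feature/target association
        shuffled = [r[:col] + [v] + r[col + 1:] for r, v in zip(X, vals)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats
```

Shuffling a feature the model ignores yields an importance of zero, which is exactly the kind of sanity check useful when debugging or testing a model.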
Mateusz is a software developer who loves all things distributed and machine learning, and hates buzzwords. His favourite hobby is data juggling.
He obtained his M.Sc. in Computer Science from AGH UST in Krakow, Poland, during which he did an exchange at L’ECE Paris in France and worked on distributed flight-booking systems. After graduation he moved to Tokyo to work as a researcher at Fujitsu Laboratories on machine learning and NLP projects, where he is still currently based.
Demystifying the 3D web - Codemotion 2016 - Pietro Grandi
The slides I used for my talk at Codemotion Rome and Dubai in 2016.
In the talk I explain what WebGL is and why it is the most powerful technology you can choose to deliver 3D content over the web. Then I present some of the biggest frameworks: ThreeJS, BabylonJS, OSGJS and SceneJS.
Later I show real world case studies (Google, Autodesk and Unity) and in the end I rebut some myths about security concerns.
How to Empower a Platform With a Data Pipeline at Scale - Deepak Sood
StashFin provides personal loans to individuals in India through a web and mobile platform. They have originated over 620,000 loans since being founded in 2016. To scale their platform, StashFin moved from a monolithic architecture to a microservices architecture using AWS services. This included using S3 for storage, EKS for Kubernetes, and AWS Glue and Athena for analytics. They also designed a data pipeline on AWS to handle a large increase in loan applications. The pipeline uses Redis for caching, S3 as the data lake, and Athena for querying large amounts of data stored in S3. This has allowed for faster decisioning, higher reliability, and cost and performance benefits compared to managing their own infrastructure.
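Putting Redis in front of Athena queries over S3 is a textbook cache-aside setup. A minimal, hypothetical sketch (a plain dict stands in for Redis, and a callable stands in for the slow Athena query; all names are illustrative):

```python
class CacheAside:
    """Cache-aside: check the fast cache first, fall back to the slow
    source (e.g. an Athena query over S3), then populate the cache so
    repeated reads are served without re-querying."""
    def __init__(self, cache: dict, source):
        self.cache = cache    # stand-in for Redis
        self.source = source  # callable: key -> value (stand-in for Athena)
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        self.misses += 1
        value = self.source(key)
        self.cache[key] = value
        return value
```

A real deployment would add expiry (Redis TTLs) so cached results track fresh data landing in the lake, but the read path is the same.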
Connectivity is here (5G, swarm, ...). Now, let's build interplanetary apps! - Samy Fodil
Webinar recording: https://youtu.be/t30Aa-mq93Q
Do you need to build scalable 5G and IoT applications? Or, maybe distribute the computing required by AR/VR throughout the data path? Perhaps you need to implement Digital Twins? Well you've come to the right place.
Edge Computing is a paradigm that distributes computing and data storage between the Cloud and the users. In fact, the data center infrastructure that sits between you and the Cloud is actually larger than all the Cloud data centers combined. For over two decades, thanks to that Edge infrastructure you've been able to watch videos and smoothly surf the web. Today the "Edge" is powering all the automation around you; for example, smart cities, smart cars, smart factories, etc.
Enterprise Desktops Well Served - a technical perspective on virtual desktops - Molten Technologies
This document discusses desktops as a service (DaaS) and the technical challenges of deploying virtual desktop solutions in an enterprise. It outlines recommendations for addressing challenges in areas like networking, storage, servers, offline access, and licensing. While DaaS currently delivers virtual desktop operating systems, the document predicts that technologies like rich internet applications will allow DaaS to move away from true desktop OSes. Further development is still needed for applications and cloud services to integrate seamlessly.
The journey toward a self-service data platform at Netflix - sf 2019Karthik Murugesan
The Netflix data platform is a massive-scale, cloud-only suite of tools and technologies. It includes big data tech (Spark and Flink), enabling services (federated metadata management), and machine learning support. But with power comes complexity. Kurt Brown explains how Netflix is working toward an easier, "self-service" data platform without sacrificing any enabling capabilities.
Domain driven design and model driven developmentDmitry Geyzersky
This document discusses domain driven design and model driven development. It introduces ontology and how it relates to domain driven design. The document outlines key domain driven design building blocks like the domain model, entities, value objects, repositories, services, and aggregates. It also discusses challenges of domain driven design and code generation techniques.
Bighead: Airbnb's end-to-end machine learning platform
Airbnb has a wide variety of ML problems ranging from models on traditional structured data to models built on unstructured data such as user reviews, messages and listing images. The ability to build, iterate on, and maintain healthy machine learning models is critical to Airbnb’s success. Bighead aims to tie together various open source and in-house projects to remove incidental complexity from ML workflows. Bighead is built on Python, Spark, and Kubernetes. The components include a lifecycle management service, an offline training and inference engine, an online inference service, a prototyping environment, and a Docker image customization tool. Each component can be used individually. In addition, Bighead includes a unified model building API that smoothly integrates popular libraries including TensorFlow, XGBoost, and PyTorch. Each model is reproducible and iterable through standardization of data collection and transformation, model training environments, and production deployment. This talk covers the architecture, the problems that each individual component and the overall system aims to solve, and a vision for the future of machine learning infrastructure. It’s widely adopted in Airbnb and we have variety of models running in production. We plan to open source Bighead to allow the wider community to benefit from our work.
Speaker: Andrew Hoh
Andrew Hoh is the Product Manager for the ML Infrastructure and Applied ML teams at Airbnb. Previously, he has spent time building and growing Microsoft Azure's NoSQL distributed database. He holds a degree in computer science from Dartmouth College.
Building Intelligent Applications, Experimental ML with Uber’s Data Science W...Databricks
In this talk, we will explore how Uber enables rapid experimentation of machine learning models and optimization algorithms through the Uber’s Data Science Workbench (DSW). DSW covers a series of stages in data scientists’ workflow including data exploration, feature engineering, machine learning model training, testing and production deployment. DSW provides interactive notebooks for multiple languages with on-demand resource allocation and share their works through community features.
It also has support for notebooks and intelligent applications backed by spark job servers. Deep learning applications based on TensorFlow and Torch can be brought into DSW smoothly where resources management is taken care of by the system. The environment in DSW is customizable where users can bring their own libraries and frameworks. Moreover, DSW provides support for Shiny and Python dashboards as well as many other in-house visualization and mapping tools.
In the second part of this talk, we will explore the use cases where custom machine learning models developed in DSW are productionized within the platform. Uber applies Machine learning extensively to solve some hard problems. Some use cases include calculating the right prices for rides in over 600 cities and applying NLP technologies to customer feedbacks to offer safe rides and reduce support costs. We will look at various options evaluated for productionizing custom models (server based and serverless). We will also look at how DSW integrates into the larger Uber’s ML ecosystem, e.g. model/feature stores and other ML tools, to realize the vision of a complete ML platform for Uber.
This document discusses recommendations and machine learning at Netflix. It provides an overview of:
- How Netflix provides personalized recommendations on member homepages to help them find content to watch.
- Netflix's experimentation cycle of designing experiments, collecting data, generating features, training models, and doing A/B testing.
- How Netflix handles "facts" or input data for recommendations, including how facts change over time and how they are logged and stored at scale.
- The challenges of logging and accessing facts at Netflix's scale, and how they are addressing issues like deduplication, performance, and supporting different access patterns.
Bighead: Airbnb’s End-to-End Machine Learning Platform with Krishna Puttaswa...Databricks
Bighead is Airbnb's machine learning infrastructure that was created to:
- Standardize and simplify the ML development workflow;
- Reduce the time and effort to build ML models from weeks/months to days/weeks; and
- Enable more teams at Airbnb to utilize ML.
It provides shared services and tools for data management, model training/inference, and model management to make the ML process more efficient and production-ready. This includes services like Zipline for feature storage, Redspot for notebook environments, Deep Thought for online inference, and the Bighead UI for model monitoring.
TensorFlow 16: Building a Data Science Platform Seldon
1. The document discusses building a data science platform on DC/OS to operationalize machine learning models. It outlines challenges at each stage of the ML pipeline and how DC/OS addresses them with distributed computing capabilities and services for data storage, processing, model training and deployment.
2. Key stages covered include data preparation, distributed training using frameworks like TensorFlow, model management with storage of trained models, and low-latency model serving for production with TensorFlow Serving.
3. DC/OS provides a full-stack platform to operationalize ML at scale through distributed computing resources, container orchestration, and integration of open source data and ML services.
A Context Map will visualize your system: cluttered models, too much or too little communication, and dependencies on other systems are just some of the insights you'll gain if you start using them.
Machine Learning Powered by Graphs - Alessandro Negro (GraphAware)
Graph-based machine learning is becoming a very important trend in Artificial Intelligence, cutting across many other techniques. The world's largest companies are promoting this trend. For instance, Google's Expander platform combines semi-supervised machine learning with large-scale graph-based learning by building a multi-graph representation of the data, with nodes corresponding to objects or concepts and edges connecting concepts that share similarities.
Using graphs as the basic representation of data for machine learning purposes has several advantages: (i) the data is already modelled for further analysis, explicitly representing connections and relationships between things and concepts; (ii) graphs can easily combine multiple sources into a single graph representation and learn over them, creating Knowledge Graphs; (iii) many machine learning algorithms exploit graphs to improve computational performance and result quality.
The presentation illustrates these advantages, also presenting applications such as recommendation engines and natural language processing that use machine learning over a graph. Concrete scenarios, models, and end-to-end infrastructure are discussed.
The document discusses Domain Driven Design (DDD), a software development approach that focuses on building an object-oriented model of the domain that software needs to represent. It emphasizes modeling the domain closely after the structure and language of the problem domain. Key aspects of DDD discussed include ubiquitous language, bounded contexts, entities, value objects, aggregate roots, repositories, specifications, domain services, modules, domain events, and command query separation. DDD is best suited for projects with a significant domain complexity where closely modeling the problem domain can help manage that complexity.
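To make the pattern vocabulary above concrete, here is a minimal Python sketch of a value object, an entity, and an aggregate root that enforces an invariant at its boundary. The order/credit-limit domain and all names are illustrative assumptions, not taken from the document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)          # value object: immutable, compared by value
class Money:
    amount: int
    currency: str

@dataclass
class OrderLine:                 # entity living inside the aggregate
    sku: str
    price: Money

class Order:                     # aggregate root: the only entry point
    def __init__(self, order_id: str, credit_limit: Money):
        self.order_id = order_id
        self.credit_limit = credit_limit
        self._lines: list[OrderLine] = []

    def add_line(self, line: OrderLine) -> None:
        # Invariants are enforced at the aggregate boundary, not by callers.
        if line.price.currency != self.credit_limit.currency:
            raise ValueError("currency mismatch")
        if self.total() + line.price.amount > self.credit_limit.amount:
            raise ValueError("credit limit exceeded")
        self._lines.append(line)

    def total(self) -> int:
        return sum(l.price.amount for l in self._lines)
```

Because callers can only mutate lines through `Order`, the aggregate stays consistent no matter who uses it.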
"SQL o NoSQL? Progettare applicazioni 'Big Data-ready' attraverso l'utilizzo della Polyglot Persistence" ("SQL or NoSQL? Designing 'Big Data-ready' applications through Polyglot Persistence") by Mario Cartia (Codemotion)
The CAP theorem, formulated by Eric Brewer in 1998, is a conjecture stating that a distributed system cannot simultaneously guarantee consistency, availability, and partition tolerance. Hence the need for a simple approach to designing applications that use a multiplicity of data-storage technologies (RDBMS, NoSQL, DFS, etc.), hiding the underlying complexity by exposing a single external interface for accessing them.
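The "single external interface" idea can be sketched as a repository facade that hides which store holds which kind of record. This is a minimal sketch assuming in-memory stand-ins for the real RDBMS/NoSQL backends; the class and method names are hypothetical, not from the talk.

```python
class KeyValueStore:
    """In-memory stand-in for e.g. a NoSQL key-value store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class DocumentStore(KeyValueStore):
    """Stand-in for e.g. a document database; same interface, different engine."""
    pass

class PolyglotRepository:
    """Routes each kind of record to its backend; callers never know which."""
    def __init__(self):
        self._stores = {"profile": KeyValueStore(), "invoice": DocumentStore()}
    def save(self, kind, key, record):
        self._stores[kind].put(key, record)
    def load(self, kind, key):
        return self._stores[kind].get(key)

repo = PolyglotRepository()
repo.save("profile", "u1", {"name": "Ada"})
repo.save("invoice", "i1", {"total": 42})
```

Swapping a backend (say, moving invoices from a document store to a relational one) then touches only the routing table, not the application code.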
Graph Database Prototyping made easy with Graphgen (GraphAware)
Graphgen aims at helping people prototype a graph database by providing a visual tool that eases the generation of nodes and relationships with a Cypher DSL.
Many people struggle with not only creating a good graph model of their domain but also with creating sensible example data to test hypotheses or use-cases.
Graphgen aims at helping people with little time but a good enough understanding of their domain model, by providing a visual DSL for data-model generation which borrows heavily from Neo4j's Cypher graph query language.
The ASCII-art notation lets even non-technical users write and read model descriptions/configurations that are as concise as plain English yet formal enough to be parsed. The underlying generator takes the DSL inputs (structure, cardinalities, and amount ranges) and combines them with a comprehensive fake-data generation library to create realistic datasets of medium-to-arbitrary size and complexity.
Users can create their own models combining the basic building blocks of the DSL and share their data descriptions with others via a simple link.
Domain Driven Design Big Picture Strategic Patterns (Mark Windholtz)
The document discusses Domain-Driven Design (DDD), an approach to software development for complex problems. It provides an overview of DDD and strategic patterns for organizing large projects with multiple teams, such as defining bounded contexts and context maps. Context maps describe the relationships between models, including shared kernels, customer/supplier, and conformist relationships. The document emphasizes defining a ubiquitous language within each context and mapping contexts to understand integration strategies at a large scale.
Vertex AI: Pipelines for your MLOps workflows (Márton Kodok)
The document discusses Vertex AI pipelines for MLOps workflows. It begins with an introduction of the speaker and their background. It then discusses what MLOps is, defining three levels of automation maturity. Vertex AI is introduced as Google Cloud's managed ML platform. Pipelines are described as orchestrating the entire ML workflow through components. Custom components and conditionals allow flexibility. Pipelines improve reproducibility and sharing. Changes can trigger pipelines through services like Cloud Build, Eventarc, and Cloud Scheduler to continuously adapt models to new data.
Attended Amazon Web Services Dev Day 2018 and shared the experience with all office colleagues by giving a presentation. The presentation covers Chaos Engineering, Serverless Architecture, IoT, and Serverless Data Lake.
Machine Learning Interpretability - Mateusz Dymczyk - H2O AI World London 2018 (Sri Ambati)
This talk was recorded in London on Oct 30, 2018 and can be viewed here: https://youtu.be/p4iAnxwC_Eg
The good news is building fair, accountable, and transparent machine learning systems is possible. The bad news is it’s harder than many blogs and software package docs would have you believe. The truth is nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!
This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining viable techniques for debugging, explaining, and testing machine learning models.
Mateusz is a software developer who loves all things distributed and machine learning, and hates buzzwords. His favourite hobby is data juggling.
He obtained his M.Sc. in Computer Science from AGH UST in Krakow, Poland, during which he did an exchange at L'ECE Paris in France and worked on distributed flight booking systems. After graduation he moved to Tokyo to work as a researcher at Fujitsu Laboratories on machine learning and NLP projects, where he is still based.
Demystifying the 3D web - Codemotion 2016 (Pietro Grandi)
The slides I used for my talk at Codemotion Rome and Dubai in 2016.
In this talk I explain what WebGL is and why it is the most powerful technology you can choose to deliver 3D content over the web. Then I present some of the biggest frameworks: ThreeJS, BabylonJS, OSGJS and SceneJS.
Later I show real-world case studies (Google, Autodesk and Unity) and in the end I rebut some myths about security concerns.
How to Empower a Platform With a Data Pipeline at Scale (Deepak Sood)
StashFin provides personal loans to individuals in India through a web and mobile platform. They have originated over 620,000 loans since being founded in 2016. To scale their platform, StashFin moved from a monolithic architecture to a microservices architecture using AWS services. This included using S3 for storage, EKS for Kubernetes, and AWS Glue and Athena for analytics. They also designed a data pipeline on AWS to handle a large increase in loan applications. The pipeline uses Redis for caching, S3 as the data lake, and Athena for querying large amounts of data stored in S3. This has allowed for faster decisioning, higher reliability, and cost and performance benefits compared to managing their own infrastructure.
Connectivity is here (5G, swarm, ...). Now, let's build interplanetary apps! (Samy Fodil)
Webinar recording: https://youtu.be/t30Aa-mq93Q
Do you need to build scalable 5G and IoT applications? Or, maybe distribute the computing required by AR/VR throughout the data path? Perhaps you need to implement Digital Twins? Well you've come to the right place.
Edge Computing is a paradigm that distributes computing and data storage between the Cloud and the users. In fact, the data center infrastructure that sits between you and the Cloud is actually larger than all the Cloud data centers combined. For over two decades, thanks to that Edge infrastructure you've been able to watch videos and smoothly surf the web. Today the "Edge" is powering all the automation around you; for example, smart cities, smart cars, smart factories, etc.
Enterprise Desktops Well Served - a technical perspective on virtual desktops (Molten Technologies)
This document discusses desktops as a service (DaaS) and the technical challenges of deploying virtual desktop solutions in an enterprise. It outlines recommendations for addressing challenges in areas like networking, storage, servers, offline access, and licensing. While DaaS currently delivers virtual desktop operating systems, the document predicts that technologies like rich internet applications will allow DaaS to move away from true desktop OSes. Further development is still needed for applications and cloud services to integrate seamlessly.
Speaker: 渡邊 仁 (Mr. Watanabe)
NTT DATA Corporation, Technology Innovation Headquarters, System Technology Division, Digital Technology Promotion Office
Presented on December 17 at the Hyperledger Tokyo Meetup: Let's Celebrate 5 Years of Hyperledger with Brian Behlendorf!
A late upload: this deck was presented on Aug 31, 2019, when I delivered a talk at an AIoT seminar at the University of Lambung Mangkurat, Banjarbaru. It was part of the Republic of IoT 2019 event.
Many HPC applications are massively parallel and can benefit from the spatial parallelism offered by reconfigurable logic. While modern memory technologies can offer high bandwidth, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. Addressing these challenges requires combining compiler optimizations, high-level synthesis, and hardware design.
In this talk, I will present challenges, solutions, and trends for generating massively parallel accelerators on FPGA for high-performance computing. These architectures can provide performance comparable to software implementations on high-end processors, and much higher energy efficiency thanks to logic customization.
The document discusses best practices for accelerating the transformation to mature software manufacturing using DevOps principles. It recommends adopting modern approaches like microservices, serverless architectures, infrastructure as code, and event-driven architectures to increase speed and agility. Automating testing and deployments through continuous integration/delivery (CI/CD) pipelines is key. The document advocates treating infrastructure like code and using tools like AWS CodePipeline and GitLab to enable self-service platforms and faster delivery through automation.
Dart on Arm - Flutter Bangalore June 2021 (Chris Swan)
Running Dart on Arm servers, covering the trade-offs between JIT and AOT, the dependencies needed for building and running AOT binaries, and how to cross-compile Arm binaries.
Serverless is now a well-established pattern for all things cloud. As we leverage this style of architecture with more power, we require more control. Discover how good architects and developers design and develop serverless platforms for the enterprise. We describe a framework that will move your serverless systems from good to great and help you grow our connected world.
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake... (HostedbyConfluent)
Unlike just a few years ago, today the lakehouse architecture is an established data platform embraced by all major cloud data companies such as AWS, Azure, Google, Oracle, Microsoft, Snowflake and Databricks.
This session kicks off with a technical, no-nonsense introduction to the lakehouse concept, dives deep into the lakehouse architecture and recaps how a data lakehouse is built from the ground up with streaming as a first-class citizen.
Then we focus on serverless for streaming use cases. Serverless concepts are well-known from developers triggering hundreds of thousands of AWS Lambda functions at a negligible cost. However, the same concept becomes more interesting when looking at data platforms.
We have all heard about the principle "It runs best on PowerPoint", so I decided to skip slides here and bring a serverless demo instead:
A hands-on, fun, and interactive serverless streaming use case example where we ingest live events from hundreds of mobile devices (don't miss out - bring your phone and be part of it!!). Based on this use case I will critically explore how much of a modern lakehouse is serverless and how we implemented that at Databricks (spoiler alert: serverless is everywhere from data pipelines, workflows, optimized Spark APIs, to ML).
TL;DR benefits for the Data Practitioners:
-Recap the OSS foundation of the Lakehouse architecture and understand its appeal
- Understand the benefits of leveraging a lakehouse for streaming and what's there beyond Spark Structured Streaming.
- Meat of the talk: The Serverless Lakehouse. I give you the tech bits beyond the hype. How does a serverless lakehouse differ from other serverless offers?
- Live, hands-on, interactive demo exploring serverless data engineering end-to-end. For each step we take a critical look and I explain what it means, e.g. for you, saving costs and removing operational overhead.
The document provides an agenda for a presentation on ThousandEyes Network Assurance. It introduces the speakers Ian Waters and Anton Lindholm and outlines challenges of managing digital experiences across distributed infrastructure with decreasing visibility. It then describes how ThousandEyes addresses these challenges through global vantage points, telemetry data, and intelligence to provide end-to-end network visibility and assure digital experiences. A demo is included on the agenda.
The document provides an overview of a presentation about ThousandEyes Digital Experience Assurance. It introduces the speakers and outlines the agenda which includes introducing ThousandEyes, discussing modern assurance challenges, a demo, and Q&A. It discusses how digital experiences impact business and outlines the challenge of managing experiences across distributed infrastructure with shrinking visibility. ThousandEyes is presented as a solution to assure digital experiences over any network with end-to-end visibility, intelligence, and automated workflows.
MobilFlex - BP Presentation - 2023.3.pdf (Mihai Buta)
The document proposes making PCs into personal data centers (PDCs) to enable edge computing, edge AI, IoT, and related applications. It notes current limitations like lack of infrastructure, processing power, and resources at the edge. The proposed solution is to improve the x86 architecture to make it more vertically layered, concurrent, flexible and scalable. This would provide 3x more processing power without cost increases. It would enable stage 3 and 4 local processing for IoT, make edge AI cloud-independent, and power smarter tinyML devices and BYOD systems. The goal is to build LOT (LAN of Things) accelerators, chipsets and end products like the PDC to serve as affordable mini data centers for individuals.
The document introduces cloud computing concepts such as defining cloud, different cloud service models including SaaS, PaaS, IaaS, and DaaS, benefits of cloud computing like reduced costs and increased flexibility, and factors to consider when evaluating cloud providers like data center location and certifications. It also provides an example of a recent healthcare client migration to a private cloud for electronic health records and other applications.
This document discusses next-generation sequencing and Dell EMC storage solutions for NGS workflows. It addresses key challenges of rapid analysis and efficient data management for petabytes of genomic data. Dell EMC PowerScale storage provides scalable storage, data management tools like CloudIQ and DataIQ, and integrations with NVIDIA Parabricks to accelerate secondary analysis through GPU computing. The document also covers architectures for life sciences organizations and compression technologies like Petagene that can reduce genomic data sizes by 60-90% for faster data processing.
Leverage Cloud Computing to Accelerate Development and Test (RightScale)
RightScale Webinar: November 18, 2010 – Watch this webinar to learn more about how you can leverage cloud computing to simplify and accelerate your DB2 development and testing.
This DB2 Chat with the Lab is brought to you in collaboration between IBM and RightScale.
- Rogue Wave is a provider of software development tools and components for high performance computing (HPC) applications. It has experience developing tools for parallel architectures.
- The landscape of application development is evolving due to increasing data complexity, new computing architectures like multi-core/GPU systems, and pressure to produce high quality applications efficiently.
- Rogue Wave aims to enable the next era of HPC by providing a workbench for developers with tools that increase productivity, support multiple languages and platforms, and help leverage large amounts of data and parallelism.
This document provides an overview of edge computing, including its evolution, driving factors, architectures, applications, trends, challenges, and device management. Edge computing aims to process data closer to where it is generated in order to reduce latency and bandwidth usage. The document outlines architectures like fog computing, cloudlet computing, and multi-access edge computing. It also discusses embedded hardware platforms, applications, and presents challenges of edge computing such as network bandwidth, security, and device management.
Essence of Requirements Engineering: Pragmatic Insights for 2024 (Asher Sterkin)
My presentation to MTA (The Academic College of Tel Aviv-Yaffo) postgraduate students, offering my take on the latest advancements in the Software Requirements Engineering field. This was a fantastic opportunity to bridge key concepts that have intrigued me for a long time.
Cloud Infrastructure from Python Code: PyCon DE-23 (Asher Sterkin)
My joint presentation with Etzik Bega at PyConDE 2023. What is infrastructure-from-Python-code technology about? What problems is it trying to solve? How far can we go with it? Is it too good to be true?
This document discusses enabling infrastructure from Python code through PyIFC. PyIFC aims to empower Python programmers to develop, deploy, and operate cloud software easily without vendor lock-in. It addresses issues with current infrastructure as code approaches like low abstraction, complexity, and vendor specificity. The PyIFC solution treats cloud resources as first-class citizens in Python code through adapters. This allows optimizing deployment locations and managing resources through pure Python. The presenter argues the potential is much bigger if Python is fully adapted for cloud-native development and common standards are established.
This document profiles Asher Sterkin, SVP of Engineering at BST LABS and former VP of Technology. It discusses his vision of "Infrastructure from Code" which aims to allow Python programmers to develop, deploy, and manage cloud infrastructure and applications through code. Key points include:
- Describing current challenges with infrastructure as code (IaC) approaches like being vendor-specific and requiring deep technical knowledge
- Introducing the concept of an "Infrastructure from Code" (IFC) solution to raise the level of abstraction and bridge the gap between development and operations
- Explaining how an IFC approach could provide a high-level API and templates to configure cloud services through
pyjamas22_ generic composite in python.pdf (Asher Sterkin)
This document discusses a generic composite design pattern in Python. It begins with an introduction to design patterns and the composite pattern. It then describes the limitations of a traditional object-oriented implementation of the composite pattern. The document proposes an alternative implementation using decorators, iterators, and other patterns. Code examples are provided to demonstrate how this generic composite pattern can be applied to build templates for cloud infrastructure and Kubernetes manifests.
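One way to read "generic composite" is that a group of components exposes the same interface as a single component, so groups nest freely. Below is a minimal functional sketch in that spirit, applied to rendering an indented template tree; the `leaf`/`group` names and the rendering example are my own illustrative assumptions, not the API from the talk.

```python
from typing import Callable, Iterable

# A component is simply a function from indentation level to rendered text.
Component = Callable[[int], str]

def leaf(text: str) -> Component:
    """A primitive component: renders one indented line."""
    return lambda indent: " " * indent + text

def group(children: Iterable[Component]) -> Component:
    """A composite: has the same signature as a leaf, so groups nest freely."""
    items = list(children)
    return lambda indent: "\n".join(c(indent + 2) for c in items)

# A nested tree, e.g. a fragment of a cloud-infrastructure template.
tree = group([leaf("Resources:"), group([leaf("Bucket"), leaf("Table")])])
print(tree(0))
```

Because both `leaf` and `group` produce the same callable shape, no class hierarchy is needed; this is one way to sidestep the limitations of the traditional object-oriented composite.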
Documenting serverless architectures: could we do it better? - O'Reilly SA Con... (Asher Sterkin)
The document discusses documenting serverless architectures. It introduces serverless architecture and some of its benefits and challenges, including the lack of clear guidelines around choosing different serverless computing options. It proposes using several views - use case view, logical view, process view, implementation view, and deployment view - based on the 4+1 architectural view model to document serverless architectures. Examples of using sequence diagrams and collaboration diagrams for the logical view and process view are provided to illustrate how different views can capture various aspects of the system architecture.
Domain driven design: a gentle introduction (Asher Sterkin)
This document provides an overview of Domain-Driven Design (DDD) concepts including:
- The common language shared between domain experts and software developers.
- Using models to capture the semantics of the domain language.
- Identifying multiple domain models with clear boundaries and mappings between them.
- Nesting boundaries at different levels of granularity such as sub-domains, bounded contexts, and aggregates.
- Key DDD patterns like entities, values, events, commands, and aggregates.
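The command/event distinction in the pattern list above can be sketched in a few lines: a command expresses intent and may be rejected, while an event records a fact that has already happened. The cargo-booking names here are illustrative assumptions borrowed from the classic DDD example domain, not code from the deck.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BookCargo:               # command: imperative mood, may fail
    cargo_id: str
    destination: str

@dataclass(frozen=True)
class CargoBooked:             # event: past tense, an immutable fact
    cargo_id: str
    destination: str

def handle(command: BookCargo, known_ports: set[str]) -> CargoBooked:
    """Validate the command; only if it succeeds does an event exist."""
    if command.destination not in known_ports:
        raise ValueError(f"unknown port: {command.destination}")
    return CargoBooked(command.cargo_id, command.destination)
```

Keeping both as frozen dataclasses makes them safe to log, replay, and pass across context boundaries.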
The document provides a summary of various strategy tools that can be useful for startups. It discusses tools for context awareness when dealing with complex systems (Cynefin), analyzing value chain evolution and movement (Wardley Maps), platform and ecosystem plays using an Innovate-Leverage-Commoditize approach, identifying bounded contexts and subdomain strategies using Strategic Domain-Driven Design, capturing customer insights using a Value Proposition Canvas and Business Model Canvas, and validating hypotheses using the Lean Startup methodology. The document recommends these various tools as part of a startup's strategic toolbox to help guide strategic decision making.
This document discusses using domain-driven design principles and patterns for serverless architectures. It begins with an introduction of the speaker and overview of topics to be covered. Then it discusses how bounded contexts from DDD map well to microservices and serverless functions. Several DDD patterns are explained for the serverless context, including repositories using CQRS and event sourcing. Strategic DDD is discussed as an organizing principle to prevent unstructured serverless applications. The document concludes by discussing challenges of measuring productivity for serverless/DDD approaches.
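A minimal sketch of the repository-with-CQRS-and-event-sourcing idea mentioned above: the write side rebuilds state by folding over an event stream, while the query side maintains a denormalized read model from the same events. The in-memory event store and all names are illustrative assumptions, not the document's code.

```python
from collections import defaultdict

class EventStore:
    """In-memory stand-in for a real event store (e.g. a DynamoDB stream)."""
    def __init__(self):
        self._streams = defaultdict(list)
    def append(self, stream_id, event):
        self._streams[stream_id].append(event)
    def read(self, stream_id):
        return list(self._streams[stream_id])

def balance(events):
    """Write side: current state is a fold over the event stream."""
    total = 0
    for kind, amount in events:
        total += amount if kind == "deposited" else -amount
    return total

class ReadModel:
    """Query side: a denormalized view updated from the same events."""
    def __init__(self):
        self.balances = defaultdict(int)
    def apply(self, account_id, event):
        kind, amount = event
        self.balances[account_id] += amount if kind == "deposited" else -amount

store, view = EventStore(), ReadModel()
for ev in [("deposited", 100), ("withdrawn", 30)]:
    store.append("acc-1", ev)
    view.apply("acc-1", ev)
```

In a serverless setting the read model would typically be updated asynchronously, e.g. by a function triggered from the event stream, which is what makes commands and queries scale independently.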
A brutally simplified, "at the edge of crime" overview of the Wardley Maps technique, integrated with Lean Startup and Strategic Domain-Driven Design. Presented at the A2B Accelerator, Jerusalem, on April 20, 2017.
What is exactly anti fragile in dev ops - v3 (Asher Sterkin)
This document discusses concepts related to anti-fragility from the book "Antifragile" applied to DevOps. It covers key topics such as continuous delivery, reducing batch sizes, embedding project knowledge into operations and vice versa, maintaining a barbell approach of balancing risk aversion with risk-taking, and ensuring asymmetric payoffs from failures versus successes. The document also examines anti-fragile characteristics of systems like Netflix and emphasizes making systems stronger through learning from failures rather than obsessively avoiding all failures.
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code (Aftab Hussain)
Understanding variable roles in code has been found to help students learn programming -- could variable roles also help deep neural models perform coding tasks? We do an exploratory study.
- These are the slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering (ICSE 2023), Melbourne, Australia.
Why Mobile App Regression Testing is Critical for Sustained Success: A Detail... (kalichargn70th171)
A dynamic process unfolds in the intricate realm of software development, dedicated to crafting and sustaining products that effortlessly address user needs. Amidst vital stages like market analysis and requirement assessments, the heart of software development lies in the meticulous creation and upkeep of source code. Code alterations are inherent, challenging code quality, particularly under stringent deadlines.
Introducing Crescat - Event Management Software for Venues, Festivals and Eve... (Crescat)
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
OpenMetadata Community Meeting - 5th June 2024 (OpenMetadata)
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Transform Your Communication with Cloud-Based IVR Solutions (TheSMSPoint)
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Hand Rolled Applicative User Validation Code Kata (Philip Schwarz)
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather to provide a small, rough-and-ready exercise to reinforce your muscle memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
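The kata itself is in Scala; as a rough Python analogue (my own assumption, not the kata's code), here is validation in the applicative spirit: every check runs and failures accumulate, instead of stopping at the first error as fail-fast (monadic) validation would.

```python
def validate(value, checks):
    """Run every check; collect all failure messages rather than short-circuiting."""
    errors = []
    for check in checks:
        ok, msg = check(value)
        if not ok:
            errors.append(msg)
    return ("valid", value) if not errors else ("invalid", errors)

def min_length(n):
    return lambda s: (len(s) >= n, f"shorter than {n}")

def has_digit():
    return lambda s: (any(c.isdigit() for c in s), "no digit")

# "abc" fails both checks, and both failures are reported together.
print(validate("abc", [min_length(8), has_digit()]))
```

This mirrors what `<*>` buys you in the Scala version: independent checks compose, and the user sees every problem at once.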
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
An Enterprise Resource Planning system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which helps enhance productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions (Peter Muessig)
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended to your needs. This session will showcase various tooling extensions which can boost your development experience considerably, so that you can truly work offline, transpile the code in your project to use even newer versions of ECMAScript (than 2022, which is what the UI5 tooling supports right now), consume any npm package of your choice in your project, use different kinds of proxies, and even stitch UI5 projects together during development to mimic your target environment.
WhatsApp offers simple, reliable, and private messaging and calling services for free worldwide. With end-to-end encryption, your personal messages and calls are secure, ensuring only you and the recipient can access them. Enjoy voice and video calls to stay connected with loved ones or colleagues. Express yourself using stickers, GIFs, or by sharing moments on Status. WhatsApp Business enables global customer outreach, facilitating sales growth and relationship building through showcasing products and services. Stay connected effortlessly with group chats for planning outings with friends or staying updated on family conversations.