The document discusses the Java Memory Model (JMM). It begins by correcting common fallacies about the JMM, noting that it describes allowed program executions and thread visibility rather than garbage collection. It then provides definitions and explanations of memory models from experts like Sarita Adve and Bill Pugh. The rest of the document discusses concepts like Moore's Law, Amdahl's Law, barriers/fences, hardware impacts, and recommendations to build an understanding of JMM patterns rather than trying to fully understand the specification.
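The visibility guarantee the summary refers to can be made concrete with a small Java sketch. This is an illustrative example, not code from the talk: the field names and the value 42 are invented. A plain write becomes visible to another thread once it is published through a `volatile` write, because the volatile write/read pair creates a happens-before edge.

```java
// A minimal sketch of JMM visibility: a plain write published via a volatile flag.
// Field names and values are illustrative, not taken from the talk.
public class VisibilityDemo {
    static int data;                 // plain field: no visibility guarantee by itself
    static volatile boolean ready;   // volatile: creates a happens-before edge

    // Returns the value the reader thread observed for `data`.
    public static int observe() {
        data = 0;
        ready = false;
        final int[] seen = new int[1];
        Thread writer = new Thread(() -> {
            data = 42;     // plain write...
            ready = true;  // ...published by the volatile write that follows it
        });
        Thread reader = new Thread(() -> {
            while (!ready) { Thread.onSpinWait(); } // spin on the volatile read
            seen[0] = data; // the JMM guarantees this sees 42, never a stale 0
        });
        reader.start();
        writer.start();
        try {
            writer.join();
            reader.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println("reader observed data = " + observe()); // always 42
    }
}
```

Without the `volatile` modifier on `ready`, the reader could legally spin forever or observe `data == 0`; that gap between intuition and allowed executions is exactly what the memory model specifies.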
The Software Challenges of Building Smart Chatbots (ICSE'21) - Jordi Cabot
Chatbots are popular solutions assisting humans in multiple fields, such as customer support or e-learning. However, building such applications has become a complex task requiring a high level of expertise in a variety of technical domains. Chatbots need to integrate (AI-based) NLU components, but also connect to internal/external services, deploy on various platforms, etc.
The briefing will first cover the current landscape of chatbot frameworks. Then, we’ll get our hands dirty and create a few bots of increasing difficulty playing with aspects like entity recognition, sentiment analysis, event processing, or testing. By the end of the session, attendees will have all the keys to understand the main steps and obstacles to building a good chatbot.
Dealing with contributor overload (FOSS Backstage) - Holden Karau
The first external person contributing to our project is amazing, but when that 1 snowballs to 1,000, life can get a little bit stressful. All of these fine, lovely people want to help, but somehow no one seems to want to deal with code reviews, proposed documentation changes, or keeping your testing infrastructure alive, or maybe they just want to pull in different directions.
This talk explores what happens as a community grows and provides recommendations for organizing your community. We’ll focus on how to control the fun chaos and how to build a development path that keeps your committers engaged and your community growing. All of these are based on the speakers’ experiences in their own personal projects (which have far fewer than 1,000 contributors) as well as larger projects, like Apache Spark.
Come for the being told it’s not your fault, stay for the techniques to avoid pissing everyone off.
P.S.
If one of the speakers is behind on reviewing one of your pull requests, she is sorry, would like to offer you a sticker, and hopes this talk explains some of why she is late.
Video - https://youtu.be/XS8cTLAuHUw
Rohit Sharma - DevOps virtual assistant - automate DevOps stuff using NLP a... - Dariia Seimova
Stanley was a sysadmin who was bored with repetitive tasks. He adopted DevOps practices like automation but still had to manually trigger commands. He then started using ChatOps through Slack but still had to send commands. Finally, he developed VoiceOps to automate tasks through voice assistants like Google Assistant. This allows even non-technical executives to get answers by speaking naturally. Some tasks he automated include fetching server lists, checking statuses and uptime, executing jobs, and restarting services. The solution uses APIs, scripts, and integrates automation with NLP chatbots. This provides benefits like speed, natural language, context, and frictionless use.
This presentation is a part of the MosesCore project that encourages the development and usage of open source machine translation tools, notably the Moses statistical MT toolkit.
MosesCore is supported by the European Commission Grant Number 288487 under the 7th Framework Programme.
For the latest updates, follow us on Twitter - #MosesCore
A computer program is a set of instructions that tells a computer what tasks to perform. Programming languages allow humans to write code in a language the computer can understand. There are high-level languages like C++ and Java that are closer to human language, and low-level assembly languages that are closer to machine language. All programs must eventually be translated into machine language by a compiler or interpreter so the computer can execute the instructions. The basics of computer programming involve understanding what a program is, learning a programming language, and using tools like compilers and interpreters to translate code into a format the computer can understand and run.
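The compiler/interpreter distinction above can be made concrete with a toy sketch. This is a hypothetical example, not from the document: a tiny interpreter that reads arithmetic source text and executes it directly, the way an interpreter does, rather than first translating it to machine code the way a compiler would.

```java
// A toy interpreter for arithmetic expressions with + - * and precedence.
// Illustrative only: it executes source text directly instead of compiling it.
public class TinyInterp {
    private final String src;
    private int pos;

    private TinyInterp(String src) { this.src = src.replace(" ", ""); }

    public static int eval(String expr) { return new TinyInterp(expr).expr(); }

    // expr := term (('+' | '-') term)*
    private int expr() {
        int v = term();
        while (pos < src.length() && (src.charAt(pos) == '+' || src.charAt(pos) == '-')) {
            char op = src.charAt(pos++);
            v = (op == '+') ? v + term() : v - term();
        }
        return v;
    }

    // term := number ('*' number)*   (multiplication binds tighter than + and -)
    private int term() {
        int v = number();
        while (pos < src.length() && src.charAt(pos) == '*') {
            pos++;
            v *= number();
        }
        return v;
    }

    // number := digit+
    private int number() {
        int start = pos;
        while (pos < src.length() && Character.isDigit(src.charAt(pos))) pos++;
        return Integer.parseInt(src.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(eval("2 + 3 * 4"));  // 14
        System.out.println(eval("10 - 2 * 3")); // 4
    }
}
```

A compiler for the same little language would instead emit instructions for a target machine; the grammar and parsing work would be identical, only the back end differs.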
So we're running Apache ZooKeeper. Now What? - Camille Fournier (Hakka Labs)
The ZooKeeper framework was originally built at Yahoo! to make it easy for the company’s applications to access configuration information in a robust and easy-to-understand way, but it has since grown to offer many features that help coordinate work across distributed clusters. Apache ZooKeeper has become a de facto standard coordination service and is used by Storm, Hadoop, HBase, Elasticsearch, and other distributed computing frameworks.
Automate your Job and Business with ChatGPT #3 - Fundamentals of LLM/GPT - Anant Corporation
This document provides an agenda for a full-day bootcamp on large language models (LLMs) like GPT-3. The bootcamp will cover fundamentals of machine learning and neural networks, the transformer architecture, how LLMs work, and popular LLMs beyond ChatGPT. The agenda includes sessions on LLM strategy and theory, design patterns for LLMs, no-code/code stacks for LLMs, and building a custom chatbot with an LLM and your own data.
Lightning talk on Java Memory Consistency Model (Java Day Kiev 2014) - Tomek Borek
A brief introduction to what the Memory Model is about, why it matters, and when. Explains why it was changed in Tiger (Java 5) and how. Also points to further reading and offers a definition. I gave this talk at PJUG and Java Day Kiev in 2014.
This document provides 10 tips for analyzing Wikipedia public data:
1. Be aware of special page types like disambiguation pages and redirects that need filtering.
2. Plan hardware carefully, prioritizing memory over disk and considering database engine configuration.
3. Fine tune database engine parameters to your hardware and exploit memory.
4. Use source control, publish code publicly, document code, and include testing.
5. Consider tools like Python and Perl that are well-suited to Wikipedia's text and link data formats.
6. Leverage existing solutions rather than reinventing functionality.
7. Automate processes to handle large datasets and enable reproducibility.
8. Expect
This document provides an opinionated guide to thinking functionally as a software architect. It recommends starting with Racket to learn functional programming concepts like recursion, higher order functions, and avoiding mutations. Some key aspects of functional programming discussed are using functions as first-class citizens, favoring recursion over iteration, laziness with streams, expressions over statements, and composition over inheritance. The document also mentions concepts like closures, pattern matching, currying, and persistent data structures. It recommends resources for learning more about functional programming in Racket, Lisp, Haskell, F#, OCaml, and Idris.
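The guide above recommends Racket, but the same ideas carry over to most languages. As an illustrative sketch (not taken from the guide), here are several of the concepts it lists, recursion over iteration, functions as first-class citizens, composition, and currying, expressed in Java:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative Java renderings of functional-programming ideas from the guide.
public class FunctionalSketch {
    // Recursion in place of iteration.
    public static long factorial(long n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    // A higher-order function: takes two functions, returns their composition.
    public static <A, B, C> Function<A, C> compose(Function<A, B> f, Function<B, C> g) {
        return a -> g.apply(f.apply(a));
    }

    // Currying: a two-argument add expressed as nested one-argument functions.
    public static final Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;

    public static void main(String[] args) {
        Function<Integer, Integer> inc = x -> x + 1;
        Function<Integer, Integer> dbl = x -> x * 2;
        System.out.println(compose(inc, dbl).apply(3)); // (3 + 1) * 2 = 8
        System.out.println(add.apply(2).apply(5));      // 7
        System.out.println(factorial(5));               // 120
        // Expressions over statements, no mutation: map a list without a loop.
        List<Integer> doubled = List.of(1, 2, 3).stream()
                .map(dbl)
                .collect(Collectors.toList());
        System.out.println(doubled);                    // [2, 4, 6]
    }
}
```

Languages like Racket or Haskell make these idioms the default rather than an opt-in style, which is why the guide suggests learning them there first.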
Episode 2: The LLM / GPT / AI Prompt / Data Engineer Roadmap - Anant Corporation
In this episode we'll discuss the different flavors of prompt engineering in the LLM/GPT space. Depending on your skill level, you should be able to pick up at any of the following levels:
Leveling up with GPT
1: Use ChatGPT / GPT Powered Apps
2: Become a Prompt Engineer on ChatGPT/GPT
3: Use GPT API with NoCode Automation, App Builders
4: Create Workflows to Automate Tasks with NoCode
5: Use GPT API with Code, make your own APIs
6: Create Workflows to Automate Tasks with Code
7: Use GPT API with your Data / a Framework
8: Use GPT API with your Data / a Framework to Make your own APIs
9: Create Workflows to Automate Tasks with your Data /a Framework
10: Use Another LLM API other than GPT (Cohere, HuggingFace)
11: Use open source LLM models on your computer
12: Finetune / Build your own models
Series: Using AI / ChatGPT at Work - GPT Automation
Are you a small business owner or web developer interested in leveraging the power of GPT (Generative Pretrained Transformer) technology to enhance your business processes?
If so, join us for a series of events focused on using GPT in business. Whether you're a small business owner or a web developer, you'll learn how to leverage GPT to improve your workflow and provide better services to your customers.
LangChain intro; Keymate.AI Search plugin for ChatGPT; how to use the LangChain library; how to implement similar functionality in the programming language of your choice; example LangChain applications.
The presentation revolves around the concept of "LangChain". This innovative framework is designed to "chain" together different components to create more advanced use cases around Large Language Models (LLMs). The idea is to leverage the power of LLMs to tackle complex problems and generate solutions that are more than the sum of their parts.
One of the key features of the presentation is the application of the "Keymate.AI Search" plugin in conjunction with the Reasoning and Acting Chain of Thought (ReAct) framework. The presenter encourages the audience to utilize these tools to generate reasoning traces and actions. The ReAct framework, learned from an initial search, is then applied to these traces and actions, demonstrating the potential of LLMs to learn and apply complex frameworks.
The presentation also delves into the impact of climate change on biodiversity. The presenter prompts the audience to look up the latest research on this topic and summarize the key findings. This exercise not only highlights the importance of climate change but also demonstrates the capabilities of LLMs in researching and summarizing complex topics.
The presentation concludes with several key takeaways. The presenter emphasizes that specialized custom solutions work best and suggests a bottom-up approach to expert systems. However, they caution that over-abstraction can lead to leakages, causing time and money limits to hit early and tasks to fail or require many iterations. The presenter also notes that while prompt engineering is important, it's not necessary to over-optimize if the LLM is clever. The presentation ends on a hopeful note, expressing a need for more clever LLMs and acknowledging that good applications are rare but achievable.
Overall, the presentation provides a comprehensive overview of the LangChain framework, its applications, and the potential of LLMs in solving complex problems. It serves as a call to action for the audience to explore these tools and frameworks.
Langchain Framework is an innovative approach to linguistic data processing, combining the principles of language sciences, blockchain technology, and artificial intelligence. This deck introduces the groundbreaking elements of the framework, detailing how it enhances security, transparency, and decentralization in language data management. It discusses its applications in various fields, including machine learning, translation services, content creation, and more. The deck also highlights its key features, such as immutability, peer-to-peer networks, and linguistic asset ownership, that could revolutionize how we handle linguistic data in the digital age.
What drives Innovation? Innovations And Technological Solutions for the Distr... - Stefano Fago
Social networking and social marketing drove innovation over the last 5 years by creating a need for [1] big data to support large user numbers, [2] high performance to support big data, and [3] high scalability to support growth. This need led to the development of new technologies for custom and polyglot persistence, streaming analytics, high performance through concurrency and parallelism, and scalability through distributed algorithms and systems. Enabling fast evolution required tools for rapid development, testing, and deployment as well as skilled and engaged employees.
Enterprise Blockchain is here. Businesses can leverage the power of blockchain to achieve trust, transparency, and accountability. But are developers equipped with the knowledge to learn the tech and solve enterprise needs? I talk about what enterprise blockchain is and how developers can get started with it using simple examples.
Pair programming involves two programmers working together at one computer. One person acts as the driver who types code while the other navigates and reviews. It has benefits like catching mistakes earlier, improving design quality, transferring knowledge between partners, and creating a stronger sense of team. While there is initially a 15% overhead in time, studies show this is outweighed by fewer defects and a more flexible system in the long run. Effective pair programming requires collaboration, respect, communication and regularly alternating roles.
The document provides an overview of parallelization using OpenMP. It discusses how parallel programming models have evolved with hardware to improve performance and efficiency. It describes shared memory and message passing models like OpenMP and MPI. The document compares OpenMP and MPI, detailing their pros and cons. It explains how OpenMP can be used to achieve parallelism on shared memory systems using compiler directives and libraries.
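OpenMP itself targets C, C++, and Fortran, so the following is only a rough Java analogue (illustrative, not from the document) of the shared-memory pattern an OpenMP "parallel for" with a reduction clause expresses: split the iteration range across threads, have each thread accumulate a private partial result to avoid contention on shared state, then combine the partials at the end.

```java
// A shared-memory parallel reduction sketched with plain Java threads,
// analogous in spirit to an OpenMP parallel-for with a sum reduction.
public class ParallelSum {
    // Sequential baseline: sum of 1..n.
    public static long sumSequential(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    // Parallel version: each worker sums a strided slice into a private slot.
    public static long sumParallel(int n, int workers) {
        long[] partial = new long[workers];
        Thread[] threads = new Thread[workers];
        for (int w = 0; w < workers; w++) {
            final int id = w;
            threads[w] = new Thread(() -> {
                long s = 0; // thread-private accumulator, no shared mutation
                for (int i = id + 1; i <= n; i += workers) s += i;
                partial[id] = s;
            });
            threads[w].start();
        }
        try {
            // join() establishes happens-before, so reading partial[] is safe.
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        long total = 0;
        for (long p : partial) total += p;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumSequential(1000));  // 500500
        System.out.println(sumParallel(1000, 4)); // 500500
    }
}
```

In OpenMP the same intent fits on one directive (`#pragma omp parallel for reduction(+:s)`); the Java version makes explicit the work the directive does for you, which is why directive-based models are attractive on shared-memory systems.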
Getting started contributing to Apache Spark - Holden Karau
Are you interested in contributing to Apache Spark? This workshop and associated slides walk through the basics of contributing to Apache Spark as a developer. This advice is based on my 3 years of contributing to Apache Spark but should not be considered official in any way.
Microseconds matter in High Frequency Trading. This document discusses how software development for high frequency trading (HFT) requires optimizations for ultra-low latency. HFT systems make profits from many small, quick trades executed at very high speeds. Latency is critical for HFT systems to capture profitable opportunities and avoid losses. The document outlines various techniques for optimizing C++ code for latency, such as removing unnecessary operations from hot code paths, using memory pools instead of dynamic memory allocation, avoiding string operations, and improving branch prediction. It also discusses the importance of measuring system performance at a microsecond level.
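The deck's techniques target C++, but the "memory pool instead of dynamic allocation" idea applies in managed languages too. The following Java sketch (the `Order` class is hypothetical) keeps the hot path allocation-free by recycling instances, which avoids both allocation cost and garbage-collector pressure; real trading systems typically use lock-free or per-thread pools, which this single-threaded sketch omits for brevity.

```java
import java.util.ArrayDeque;

// A minimal object pool: acquire/release recycle instances instead of
// allocating on the hot path. Illustrative and single-threaded only.
public class OrderPool {
    public static final class Order {
        public long price;
        public long qty;
        void reset() { price = 0; qty = 0; }
    }

    private final ArrayDeque<Order> free = new ArrayDeque<>();

    // Pre-allocate up front, outside the latency-critical path.
    public OrderPool(int size) {
        for (int i = 0; i < size; i++) free.push(new Order());
    }

    // Hot path: normally no allocation, just a pop and a field reset.
    public Order acquire() {
        Order o = free.isEmpty() ? new Order() : free.pop();
        o.reset();
        return o;
    }

    // Return the instance for reuse instead of letting it become garbage.
    public void release(Order o) { free.push(o); }

    // Demonstrates that a released instance is handed back out, fields cleared.
    public static boolean demoReuse() {
        OrderPool pool = new OrderPool(2);
        Order a = pool.acquire();
        a.price = 101;
        pool.release(a);
        Order b = pool.acquire();
        return a == b && b.price == 0;
    }

    public static void main(String[] args) {
        System.out.println("instance reused with fields reset: " + demoReuse());
    }
}
```

The same discipline underlies the deck's other advice: keep the frequently executed path free of anything with unpredictable cost, whether that is `malloc`/`new`, string formatting, or a mispredicted branch.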
Voxxed Athens 2018 - Methods and Practices for Guaranteed Failure in Big Data - Voxxed Athens
The document provides guidance on practices that can lead to failure in big data systems. It warns against assumptions that schemas are unnecessary, that databases can scale reads and writes infinitely, and that network connections and hardware will always be available. Instead, it recommends defining schemas and metadata, understanding database models, preparing for failures through testing, and managing resources and data pipelines. Proper data governance, partitioning, replication, and understanding of consistency and transaction models can help avoid failures.
Instant LAMP Stack with Vagrant and Puppet - Patrick Lee
Do you enjoy installing and configuring Apache, PHP, and MySQL every time you reinstall your OS or switch to a new machine? Neither do I. And we never have to do it again. Vagrant can use the VirtualBox API and configuration defined in Puppet to spin up a development VM in a couple of minutes. And it's really easy to do. I'll start with the simplest possible example and work up to a cluster of VMs. Feel free to bring your laptop and follow along.
Speaker:
Alex Cruise (Dir. Architecture, Metafor Software)
Abstract:
The rise of the DevOps movement has brought into welcome focus something that is often learned only through painful experience and expense: the success of a software product critically depends not only on its implementation, maintenance and enhancement, but also on how it’s deployed and operated.
Distributed systems are hard, but you can’t escape them: you need to scale out, but wrapping proxy interfaces around remote resources so they look local is a recipe for a fragile system. Plus, as the complexity of components and services increases, local systems aren’t actually as reliable as we think! Concurrency is hard, but you can’t escape it: whether you’re using threads in a single process, or multiple processes on a single machine, you still need to synchronize state between them somehow. Fault tolerance is hard, but you can’t escape it: parts will fail, you need to cope without rebooting the whole application. Correctness is hard, but you can’t escape it: whether through laborious testing or a Sufficiently Advanced Compiler, you need to have some assurance that the software will work as intended.
Let’s talk about a set of architectural patterns (and, yes, frameworks) that can really help us achieve the goals of concurrency, fault tolerance and correctness, while affording us the flexibility we need to scale our deployments when we achieve terrifying success.
This document discusses micro optimizations in C++ and their effectiveness. It begins by defining micro optimizations and noting that the real bottlenecks are often not within one's own code. It then discusses reasons both for and against micro optimizations, noting they can improve performance if used judiciously but also complicate code. The document covers measuring efficiency and complications that arise in C, C++ and higher-level languages. It emphasizes the importance of understanding what languages do behind the scenes and focusing optimizations on the "fast path" code used most frequently.
Neo4j - Product Vision and Knowledge Graphs (GraphSummit Paris) - Neo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover the latest innovations from Neo4j, notably the latest cloud integrations and product improvements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
Similar to 4Developers 2015: Java Memory Consistency Model or intro to multithreaded programming - Tomasz Borek, Jacek Jagieła
Let’s talk about a set of architectural patterns (and, yes, frameworks) that can really help us achieve the goals of concurrency, fault tolerance and correctness, while affording us the flexibility we need to scale our deployments when we achieve terrifying success.
This document discusses micro optimizations in C++ and their effectiveness. It begins by defining micro optimizations and noting that the real bottlenecks are often not within one's own code. It then discusses reasons both for and against micro optimizations, noting they can improve performance if used judiciously but also complicate code. The document covers measuring efficiency and complications that arise in C, C++ and higher-level languages. It emphasizes the importance of understanding what languages do behind the scenes and focusing optimizations on the "fast path" code used most frequently.
4Developers 2015: Java Memory Consistency Model or intro to multithreaded programming - Tomasz Borek, Jacek Jagieła
6. Today...
● A little bragging
● Fallacy correction
● Memory model
● The Java one mostly
● Short advice on what to do about it
● Lots of links for later reading
● Yeah, 45 minutes only :P
@LAFK_pl
Consultant @
7. Who knows (heard of)?
● Gene Amdahl?
● Gordon Moore?
● Leslie Lamport?
● Bill Pugh?
● Sarita Adve?
● Hans Boehm?
● Martin Thompson?
● Aleksey Shipilev?
8. Hands up, who...
● Doesn't program
● Knows Moore's law?
● Knows Amdahl's law?
● Can explain concurrency vs parallelism?
● Codes with mechanical sympathy?
● Tries to?
● Knows what mechanical sympathy is?
16. JLS, section 17.4:
A memory model describes, given a program and an execution trace of that program, whether the execution trace is a legal execution of the program. The Java programming language memory model works by examining each read in an execution trace and checking that the write observed by that read is valid according to certain rules.
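To make the JLS definition concrete, here is the classic litmus test (my sketch, not from the slides): two unsynchronized threads race on two variables, and it is precisely the memory model that decides which (r1, r2) results count as legal executions.

```java
// Classic JMM litmus test: which (r1, r2) results are legal executions?
public class Reordering {
    static int x = 0, y = 0;
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Sequential consistency allows (0,1), (1,0) and (1,1).
        // The JMM additionally allows (0,0): with no happens-before
        // edge between the threads, both reads may observe the writes
        // as if they were reordered.
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}
```

Any single run shows just one legal execution; the model's job is to bound the whole set.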
20. Bill Pugh, Jeremy Manson
● Most cores have many cache layers
● What if 2 cores look at the same value?
● A memory model defines when and who sees what
● There are strong and weak models
● Strong models guarantee seeing the same things across the whole system
● Weak models guarantee it only sometimes, via barriers / fences
21. Bill Pugh, Jeremy Manson:
What is a memory model, anyway?
At the processor level, a memory model defines necessary and sufficient conditions for knowing that writes to memory by other processors are visible to the current processor, and writes by the current processor are visible to other processors.
22. So?
● Memory CONSISTENCY
● Allowed optimisations
● Possible executions of a (possibly multithreaded!) program
● Which cores / threads see which values
● How to make it consistent for programmers
● What you're allowed to assume
23. Fallacy #1
I don't need to worry about JMCM since
REALLY smart engineers crafted it.
24. Half-true
I don't need to worry about JMCM since
REALLY smart engineers crafted it
25. Smart, sure! But still:
● Smart people are still people
● JMCM is damn hard! Yeah, they botched it.
● Java <> JVM
● JMCM is for JVM... but with Java in mind
● NO tech is a talisman of functionality!
26. JSR-133?
● Messed up final
● Spec not for humans
● Messed up double-checked locking
● Messed up volatile
● Each implementation on its own
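Double-checked locking is the canonical example of what the original JMM broke. A sketch of the idiom (class name and value are mine): under the pre-JSR-133 model, even the volatile version was unsound; since JSR-133 strengthened volatile, the form below is correct.

```java
// Double-checked locking, post-JSR-133 form. Without 'volatile' a
// thread could observe a non-null reference to a partially
// constructed Helper; the strengthened volatile semantics forbid that.
public class Helper {
    private static volatile Helper instance;
    private final int value;

    private Helper(int value) { this.value = value; }

    public static Helper getInstance() {
        Helper local = instance;            // single volatile read on the fast path
        if (local == null) {
            synchronized (Helper.class) {
                local = instance;           // re-check under the lock
                if (local == null) {
                    instance = local = new Helper(42);
                }
            }
        }
        return local;
    }

    public int value() { return value; }
}
```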
34. Amdahl's law
The speedup of a program using multiple processors in parallel computing is limited by the sequential fraction of the program. For example, if 95% of the program can be parallelized, the theoretical maximum speedup using parallel computing would be 20× as shown in the diagram, no matter how many processors are used.
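The law is a one-line formula, so the 20× ceiling can be checked directly (a small sketch of mine, not from the slides):

```java
// Amdahl's law: speedup(p, n) = 1 / ((1 - p) + p / n),
// where p is the parallelizable fraction and n the processor count.
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // With 95% parallelizable code the speedup approaches, but
        // never reaches, 1 / (1 - 0.95) = 20x regardless of core count.
        System.out.printf("n=128:  %.2fx%n", speedup(0.95, 128));   // ~17.4x
        System.out.printf("n=4096: %.2fx%n", speedup(0.95, 4096));  // ~19.9x
    }
}
```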
35. In 1967, Gene Amdahl stated:
For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit cooperative solution.
41. Where it matters
● Javac / Jython / ...
● JIT
● Hardware, duh!
● Each time: another team
42. Hardware
● CPUs with various ISAs
● Number of registers
● Cache sizes and types, bus implementations
● Cache coherency protocols (MESI, AMD's MOESI, Intel's...)
● How many functional units per CPU
● How many CPUs
● Pipeline:
● Instruction decode > address decode > memory fetch > register fetch > compute ...
45. Barriers / fences
"once memory has been pushed to the cache then a protocol of messages will occur to ensure all caches are coherent for any shared data. The techniques for making memory visible from a processor core are known as memory barriers or fences."
– Martin Thompson, Mechanical Sympathy
Differs per architecture / CPU / cache type!
46. Barriers / Fences
● CPU instruction
● Means "flush BUFFER now!"
● CMPXCHG (may be lacking!)
● Forces update
● Starts cache coherency protocols
● Read / Write / Full
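In Java you rarely issue a fence instruction yourself: a volatile write/read pair inserts the required barriers for you. A minimal sketch of the resulting guarantee (class and field names are mine, not from the talk):

```java
// A volatile flag gives the barrier behaviour described above: the
// volatile write acts as a release (store) fence, the volatile read
// as an acquire (load) fence, so the plain 'data' write is guaranteed
// visible to the reader once it sees ready == true.
public class Handoff {
    static int data;                  // plain, non-volatile field
    static volatile boolean ready;    // volatile write/read = fence pair

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { Thread.onSpinWait(); }  // spin on the volatile flag
            // happens-before guarantees data == 42 is visible here
            System.out.println("data = " + data);
        });
        reader.start();
        data = 42;      // plain write...
        ready = true;   // ...published by the volatile write
        reader.join();
    }
}
```

Remove the `volatile` keyword and the JMM no longer promises the reader ever sees 42, or even that the loop terminates.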
48. Doug Lea says:
The best way is to build up a small repertoire of constructions that you know the answers for and then never think about the JMM rules again unless you are forced to do so! Literally nobody likes figuring things out of JMM rules as stated, or can even routinely do so correctly. This is one of the many reasons we need to overhaul JMM someday.
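One construction worth keeping in such a repertoire (my pick as an illustration, not Lea's own example) is the initialization-on-demand holder idiom, which leans on the JLS class-initialization guarantees instead of raw JMM reasoning:

```java
// Initialization-on-demand holder idiom: class initialization is
// guaranteed thread-safe by the JLS, so lazy singleton creation needs
// no volatile and no locking in user code.
public class Config {
    private Config() {}

    private static class Holder {
        // Runs exactly once, when Holder is first referenced.
        static final Config INSTANCE = new Config();
    }

    public static Config getInstance() {
        return Holder.INSTANCE;   // first call triggers Holder's init
    }
}
```

This is exactly the spirit of the advice: a known-good shape you can reuse without re-deriving the rules.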
50. Mechanical sympathy:
● Cache line misses hurt
● Going to main memory hurts
● Cycles are important
● L1, L2 caches are cheap but require cache coherency protocols and memory barriers
● Not all hardware has all barriers
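The first two bullets can be felt from plain Java (a sketch of mine; the matrix size is arbitrary): summing a matrix row by row walks cache lines sequentially, while summing it column by column jumps a whole row per step and misses on nearly every access once the matrix outgrows the caches.

```java
// Same arithmetic, different memory access pattern: row-major order
// is cache-line friendly, column-major order is not.
public class Traversal {
    static final int N = 2048;

    static long rowMajor(int[][] m) {
        long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];   // walks memory sequentially
        return sum;
    }

    static long colMajor(int[][] m) {
        long sum = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j];   // strides a full row per access
        return sum;
    }

    public static void main(String[] args) {
        int[][] m = new int[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                m[i][j] = i + j;
        // Identical results; on real hardware the run times typically
        // differ substantially in favour of the row-major walk.
        System.out.println(rowMajor(m) == colMajor(m));
    }
}
```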
51. Gordon Moore
● Fairchild Semiconductor co-founder
● "Law" author
● Intel co-founder
52. Gene Amdahl
● IBM Fellow
● IBM & Amdahl mainframes
● Coined his law in 1967
54. Bill Pugh
● FindBugs
● "Java Memory Model is broken"
● Final - Volatile
● Double-checked locking
● "New" JMM
55. Sarita Adve
● "Java Memory Model is broken"
● A great many MCM papers
● Best MCM definition I found
56. Martin Thompson
● Mechanical Sympathy blog & mailing list
● Aeron protocol
● Mechanical sympathy proponent
57. This wouldn't have happened if not for
● Jarek Pałka, who kicked me out here some time ago
● Those folks who said "make more" after the lightning talk I'd done
● Java Day Kiev 2014
58. Not possible without:
● Leslie Lamport's works on distributed systems
● Bill Pugh's work on JSR-133! http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
● Sarita Adve's papers, especially the shared MCM tutorial: http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
59. Terrific - and tough - reading
● Martin Thompson: Mechanical Sympathy (mailing list & blog)
● JEP 188: http://openjdk.java.net/jeps/188
● Goetz et al., "Java Concurrency in Practice"
● Herlihy, Shavit, "The Art of Multiprocessor Programming"
● Adve, "Shared Memory Consistency Models: A Tutorial"
● Manson, "Special PoPL Issue: The Java Memory Model"
● Huisman, Petri, "The Java Memory Model: A Formal Explanation"
● Aleksey Shipilev's blog post: http://shipilev.net/blog/2014/jmm-pragmatics/
60. Laws and related:
● Moore's "law": http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf
● Rock's law: http://en.wikipedia.org/wiki/Rock's_law
● Amdahl's law: http://en.wikipedia.org/wiki/Amdahl%27s_law
● "Validity of the Single-Processor Approach to Achieving Large-Scale Computing Capabilities", AFIPS Press, 1967
● J.L. Gustafson, "Reevaluating Amdahl's Law", Comm. ACM, May 1988
● Pleasantly parallel problems: http://en.wikipedia.org/wiki/Embarrassingly_parallel
61. Special thanks
● Konrad Malawski and Tomek Kowalczewski, these guys really dig that stuff
● Bartosz Milewski, who helped me rediscover Hans Boehm