This presentation comes with many additional notes (pdf): http://de.slideshare.net/nicolayludwig/7-cpp-memory-representationpointerarithmetics-38509904
http://de.slideshare.net/nicolayludwig/7-cpp-memory-representationpointerarithmeticsexercises-38510699
- The octal and hexadecimal Numeral System
- Byte Order
- Memory Representation of Arrays and Pointer Arithmetics
- Array Addressing with "Cast Contact Lenses"
- The Heap: Segmentation and "Why do Programs crash?"
- How to understand and fix Bugs
- The Teddybear Principle
TOC
● (7) C++ Basics
– The octal and hexadecimal Numeral System
– Byte Order
– Memory Representation of Arrays and Pointer Arithmetics
– Array Addressing with "Cast Contact Lenses"
– The Heap: Segmentation and "Why do Programs crash?"
– How to understand and fix Bugs
– The Teddybear Principle
● Sources:
– Bjarne Stroustrup, The C++ Programming Language
– Charles Petzold, Code
– Rob Williams, Computer System Architecture
– Jerry Cain, Stanford Course CS 107
The octal Numeral System
● The idea is to compact bit patterns into fewer digits per value:
– Binary values can be written in groups of 3b, i.e. triples.
– A triple can represent 2³ = 8 values, so it can simply be written as a single digit.
– => We need a numeral system with the base of 8 to get matching digits.
● The numeral system on the base 8 is called octal numeral system.
– A single octal digit can hold one of the symbols [0, 7].
– To represent octal digits, no extra numeric symbols need to be added.
– The mathematical notation of octal literals looks like this: 454₈, 454oct, or 454eight
● In C/C++, integer literals can be written as octal numbers with the 0 prefix.
short s = 300;  // s (300)
short s = 0454; // s (300)
[Memory view: the 16-bit pattern of s, 0000 0001 0010 1100, with octal digit grouping 454]
4. 4
The octal Numeral System – There is a Problem
● The octal system was used when computer systems applied 24b data words.
– It was used because 24 is a perfect multiple of three (bits).
● But today we have data words of 16b, 32b and 64b.
– None of these is a perfect multiple of three!
● Problem: triple-patterns can be applied in different ways on non-multiples of 3b.
– This leads to different possible representations of the same bit pattern:
● => The octal system is no longer suitable for modern data words...
– ...but a solution is in sight!
(Figure: the 16b pattern of s (300), interpreted as one complete pattern, yields the octal digits 0454; interpreted as a 2x1B pattern, each byte gets its own triples and different octal digits result for the same bits.)
5. 5
The hexadecimal Numeral System
● In previous pictures, binary values have very often been written in groups of 4b.
– We call these "half bytes" nibbles or tetrads. Two tetrads make one octet – the byte!
– A tetrad can represent 2⁴ = 16 values, so it can simply be written as a single digit.
– => We need a numeral system with the base of 16 to get matching digits.
● The numeral system on the base 16 is called hexadecimal (short "hex") numeral system.
– A single hexadecimal digit can hold one of the symbols [0, 9] or [A, F] (or [a, f], the case does not matter).
– And a single hexadecimal digit can be directly represented by a tetrad.
– The mathematical notation of literals looks like this: 12C₁₆, 12Chex, 12Csixteen or 12Ch.
● In C/C++ integer literals can be written as hex numbers with the 0x-prefix.
● In the depicted memory view there is the 300 in hex! But something is wrong: the digits are somehow "mirrored"?
– We'll discuss this "effect" of byte order shortly!
short s = 300;   // s (300)
short s = 0x12C; // s (300) – the same value, written as a hex literal
(Memory view: the bit pattern 0000 0001 0010 1100; the byte-wise hex view shows 2C 01 – "Em, 12C₁₆ … Huh?")
6. 6
Numbers and the Memory
● I've just overheard this statement "Numbers are stored hexadecimally in a computer's memory!" - No! That's nonsense!
● The next "correction" was "Oh, I mean numbers are stored binary in a computer's memory!" - No, it's the same nonsense!
● Folks! A computer stores data as tiny pieces of electric attributes, e.g. voltage, in electronic circuits. That's all!
– Permanent memory also uses magnetic (e.g. tape), optic (e.g. DVD) or meanwhile also electric (flash) attributes.
● If these attributes have discretely distinguishable values, these are interpreted as different states, often just "on" and "off".
● Computer scientists, programmers and software interpret "on" and "off" as information.
– Software can interpret series of "on" and "off" information-pieces and groups as numbers.
– Sure, the same number can be written in the binary, octal, hexadecimal or decimal numeral system!
Memory is an arrangement of electric, magnetic or optic attributes.
(Figure: the same value – 300 as a number – has the representations 100101100 (binary), 454 (octal) and 12C (hex), and one bit pattern in the memory view.)
7. 7
Why do I need to understand Memory Representation?
● Today, programming is very comfortable.
● Understanding "the metal" helps understanding pointers.
● Understanding "the metal" helps understanding errors.
● Understanding the past helps understanding the present and the future.
● Don't believe in magic! Try to understand memory!
8. 8
Memory Representation – Byte Order – Part I
● Let's introduce some terms for the byte-components of a value:
– The byte storing the largest portion of the value is called most significant byte (MSB).
– The byte storing the smallest portion of the value is called least significant byte (LSB).
– But how is the value of s represented in memory?
● The value of s occupies 2B in memory. The type short is a multibyte type.
– But whether MSB or LSB is stored on the higher memory-address is not standardized!
● A big-endian architecture stores the LSB at the higher memory address.
● A little-endian architecture stores the MSB at the higher memory address.
short s = 300; // s (300) = 0x012C: MSB 01, LSB 2C
(Memory view, higher addresses to the right: big-endian stores 01 2C, little-endian stores 2C 01. Aha! The earlier memory view was a hexadecimal little-endian representation of 300!)
9. 9
Memory Representation – Byte Order – Part II
● Benefits of big-endian (68K, PPC, SPARC):
– Better readability: the values are represented in the way they'd be written as literals.
● Benefits of little-endian (x86):
– Scalability: if a value in memory needs resizing, the memory address is kept (like short to int).
● Relevance of the byte order in C++:
– For the representation of multibyte values having sequential bytes (a memory-block).
● I.e. for values of size > 1B (e.g. int or short), but not for arrays (esp. C-strings), which are addressed element-wise via pointer arithmetics.
– Functions like std::memcpy() operate on the exact underlying byte order.
● Irrelevance of the byte order in C++:
– The address-operator (&) always returns the address of the lowest-addressed byte.
– The assignment-operator (=) always works correctly.
– Bit-operations and conversions are effective on values, not their byte order!
10. 10
Memory Representation of Arrays
● The elements of a reside in memory as a sequential block with space for ten ints.
– The array symbol a represents the base address of the array.
– The 1st element of a (a[0]) resides at the lowest address, which is also a's base address.
● The size and count of elements of non-dynamic arrays:
● The address of the array (a (not &a)) is the same as of its first element (&a[0]).
– The symbol a does itself represent the address of the array a.
– W/o context, the address of a could be the address of an int or of an int-array.
int a[10]; // memory view: ten uninitialized ints (? ? ? ? ? ? ? ? ? ?), 4B each, 40B in total
a[0] = 44;  // hex 2C
a[9] = 100; // hex 64
std::size_t arraySize = sizeof(a); // Get the size of the array a (40).
int nElements = arraySize/sizeof(a[0]); // Get the count of elements in a (10).
11. 11
Arrays and Pointer Arithmetics – Offset Addressing
● As a represents a pointer to the first element, the index can be seen as offset.
– It is possible to access elements via offset calculations against a.
– (Keep in mind that this is valid: a == &a[0])
– This allows so-called pointer arithmetics. – This is a very important topic in C/C++!
– Pointer arithmetics is only meaningful in arrays.
● So, adding a number to the pointer a is interpreted as an offset like this:
● Pointer arithmetics for array offset addressing works like this:
– The sizeof(a[0]) * 1 is added to a's address to access the element at the index 1.
– This means that a + 1 is the calculated address of a[1]. => a + k == &a[k]
– In the example above the 1 is interpreted as an offset of 4B to the pointer a.
int a[] = {1, 2, 3, 4}; // memory view: 1 2 3 4, 4B per element
int* ap = a;
*(a + 1) = 6; // same as a[1] = 6 /* ...or via the pointer: */ *(++ap) = 6;
12. 12
Arrays and Pointer Arithmetics – Element Distance
● If we have the addresses of two elements of the array we can get their distance.
– So, subtracting two pointers to array elements is interpreted as their distance like this:
● Pointer arithmetics for array element distance works like this:
– The addresses of the elements a[1] (e1) and a[3] (e2) are pointers to int.
– Then just the count of ints "fitting between" both pointers is "measured".
– This means that (e1 + d) boils down to e2, or &a[1] + d == &a[3].
– The result of a pointer subtraction is a value of the (signed) type std::ptrdiff_t.
int a[] = {1, 2, 3, 4}; // memory view: 1 2 3 4, 4B per element
int* e1 = &a[1];
int* e2 = &a[3];
std::ptrdiff_t d = e2 - e1; // d (2): two ints fit between e1 and e2
13. 13
Pointer Arithmetics, Pointer and Value Identity
● Now we understand how the []-operator boils down to pointer arithmetics.
– This also explains why we can use the []-operator with dynamic arrays.
● Some additional facts/features make pointer arithmetics work correctly:
– All pointers can be compared against 0.
– Pointers of equal type can be compared to other pointers of equal type.
– Equal pointers represent the same address, thus the dereferenced values are equal.
● We call this feature pointer identity.
● Following "equations" are valid for pointer/value identity with pointer arithmetics:
– a == &a[0] (pointer identity)
– (a + k) == &a[k] && &a[k] == &k[a] (pointer identity)
– *a == a[0] (value identity after dereferencing)
– *(a + k) == a[k] && a[k] == k[a] (value identity after dereferencing)
int a[] = {1, 2, 3, 4};
int k = 2;
14. 14
Arrays and foreign Memory
● We call the pointer one past the last element (&a[length]) the past-the-end-pointer.
– It points to the first "element" not belonging to the array.
– It has a special meaning in C/C++.
– C/C++ support no bounds checking, but the past-the-end-pointer may be safely formed and compared – it must not be dereferenced.
● Writing to memory which we don't own is still illegal.
– The local variables of a function are packed into a so called stack frame (sf).
– Writing to array-foreign memory could modify the values of adjacent variables in the sf.
const int length = 10;
int a[length]; // memory view: ten uninitialized ints (? ? ? ? ? ? ? ? ? ?)
a[length] = 7; // writes the 7 just past the end of a – foreign memory!
a[-1] = 3;     // writes the 3 right before the start of a – foreign memory!
15. 15
An Array with Cast Contact Lenses on
● The expression a[2] is calculated to be effectively at the address a + (2 * sizeof(int)).
– The value at this offset (in memory), which has space for one int, is set to 128.
● Then we put cast contact lenses of type short on:
– The expression reinterpret_cast<short*>(a)[5] is calculated to be effectively at the address a + (5 * sizeof(short)).
– The value at this offset, which is assumed to have space for one short, is set to 2.
● Taking the contact lenses off again: we've modified the two higher-addressed bytes of the value of a[2].
– The int at a[2] has a completely different value from 128 or 2!
int a[4];                           // memory view: four uninitialized ints, 4B each
a[2] = 128;                         // a[2] (128): bytes 80 00 00 00 (little-endian)
reinterpret_cast<short*>(a)[5] = 2; // offset a + 5 * 2B: writes 02 00 into the upper half of a[2]
// → a[2] (131200) = 2 * 65536 + 128
16. 16
Memory Segmentation – Geometric Memory Properties
● Let's assume a pointer has a size of 4B.
● The stack segment
– stores local (auto) variables,
– manages the stack of function calls and
– is owned and managed by hardware.
● The heap segment
– is managed by the heap manager and
– is owned and managed by software.
● std::malloc(), std::realloc(), std::free() etc.
(Figure: the address space from 0 to 2³² − 1, divided into code segment, data segment, heap segment and stack segment.)
void* a = std::malloc(80);      // an 80B block on the heap
void* b = std::malloc(40);      // a 40B block on the heap
void* c = std::realloc(b, 100); // a 100B block, possibly at a new address
17. 17
Why do Programs crash?
● Segmentation fault:
– Happens if we dereference a bad pointer.
– The 0-pointer is not part of any segment.
– This also happens on "dereferencing very small numbers..."
● Bus error:
– Happens if we dereference a pointer having a misaligned (unexpected) location.
– Here the runtime may spot the error, as shorts usually reside on even addresses.
● But vp was pointing into a segment, so this is no segmentation fault.
char* ch = 0;
char c = *ch; // Dereferencing the 0-pointer...
void* vp; // The pointer vp will be initialized with rubbish.
*reinterpret_cast<short*>(vp) = 42; // Dereferencing will fail with a chance of 50% (see explanation).
18. 18
Why do Programs crash? – A 20k Miles Perspective
● A crash is a fatal error, maybe due to
– a hardware malfunction or
– a logical software error (let's call this a software bug or simply a bug).
● Let's focus on bugs, why can we have bugs in our software?
– Bad values, e.g. invalid input or uninitialized memory.
– Unwarranted assumptions, e.g. infrastructure problems like "disc full".
– An otherwise faulty logic.
● This is relevant for C/C++, as there are no guarantees when features are misused!
● Help yourself: Trace a bug with tools:
– Analyze existing stacktraces and logs. Customers can often provide them if the program produced such information.
– Use "printf()-debugging" and/or IDE-debugging.
– Create and run unit tests before and after the error was found.
19. 19
Nailing down Bugs – Simple but useful Tips
● 1. Bug-finding needs to be done systematically.
● 2. Make assumptions about the bug or problem!
● 3. We should be utterly critical about our assumptions and take nothing for granted!
– We have always to check user input!
– Forgetting to program defensively is a major source of errors!
– Compile with highest warning level!
● 4. Don't panic!
● 5. Do it with pair programming.
● 6. Explain the bug or problem to another person, or...
20. 20
The Teddybear Principle
● This is a serious idea for problem solving!
● If we have a bug or problem: We should first talk to the teddybear!
– It is often helpful to reflect on the problem in question with a peer.
– Notice that successful problem solvers often have sidekicks!
● Sherlock Holmes → Dr. John Watson
● Dr. Gregory House → Dr. James Wilson
● Only if the teddybear provides no answer: We should ask a trainer.
21. 21
Allocation from the Heap – normative Process
● The heap can be seen as a large array. (Here assuming that it is totally free.)
● Memory allocation from the heap works like so (simplified):
– Search from the beginning of the heap for a big enough free block of memory.
– Record this allocated space (address and size) somewhere.
– Return the address of the allocated space.
● The memory for b can be allocated right "after" a's memory.
– In principle the procedure is exactly like that for a's memory allocation.
● The heap manager may use any other heuristics to make it faster...
void* a = std::malloc(40); // a 40B block at the beginning of the heap
void* b = std::malloc(60); // a 60B block right after a's block
22. 22
Freeing from the Heap and Fragmentation
● Freeing memory from the heap:
– Mark the memory addressed by a as free.
– The contents (bit pattern) of the memory addressed by a are not touched!
– But the contents have no longer a meaning!
● Next allocation:
– The heap manager will search for a free space, starting at the beginning of the heap.
– There is a free memory block at the beginning, but it is too small (40B).
– The next free memory block is right after b + 60.
● Since blocks of different sizes get allocated and freed, the heap fragments during usage.
std::free(a);              // marks a's 40B block as free; b's 60B block stays occupied
void* c = std::malloc(45); // the 40B gap is too small, so c is placed after b's block
23. 23
Final Words on the Heap
● Every process thinks it owns the whole memory of the machine.
– At the start of an application the bounds of the heap are passed to the heap manager.
● There exist different heap managing and optimization strategies, e.g.:
– 1. The size of a heap block's memory is stored in a header field of the block.
● This means that the size of the allocated block is indeed a little bit larger than required.
– 2. A set of void* to the heap are managed in the heap manager to check freeing.
– 3. Segmented heap with segments for blocks of different sizes -> less fragmentation.
● Don't rely on proprietary features of the heap.
void* v = std::malloc(4 * sizeof(int));
(Figure: v points to the 4 * 4B block; a header field in front of it stores the size, making the allocation about 20B in total.)
Why do we introduce the octal numeral system here?
Binary numbers can be quickly transformed into octal digits and vice versa. The conversion of groups of 3b makes this possible.
It is used in some computer systems to enhance the readability of values under certain circumstances (24b data words). It is esp. used in Unix's file permission system.
Sometimes the octal numeral system does also play a role in encoding of (serial) data communication, where 3b-wise encoding can be found often.
Notice that after the 2x1B pattern application only the rightmost triples retain their representation.
Sometimes in long binary numbers the tetrads are separated by dashes to enhance the readability.
Why do we introduce the hexadecimal numeral system here?
Binary numbers can be quickly transformed into hexadecimal digits and vice versa. The conversion of groups of 4b makes this possible.
Esp. we need it to understand the contents of the memory view (i.e. memory dumps or "raw" views of data (e.g. network traffic) in the IDE's debugger), which mostly uses a hexadecimal presentation.
C++14 introduced binary integer literals with the 0b-prefix. - Before that, programmers could use special libraries like Boost, which provide macros to use binary literals. (Binary integer literals can be defined in Java 7 with the 0b-prefix as well.)
Where do we need the hexadecimal and octal numeral system?
Well, the decimal numeral system uses a base that is not a power of two. As binary numbers can be written in blocks whose lengths are powers of two, the hexadecimal and octal numeral systems can compress the projection of numbers very nicely.
Memory addresses are almost always written as hexadecimal numbers.
Media access control (MAC) addresses are typically written as six dash-separated two-digit hex numbers.
Do you know other numeral systems?
Esp. important: the Roman system and the sexagesimal system based on the value 60, used e.g. by the Babylonians, who also introduced the 360° (chosen because a year has about 360 days; still used today to display clock time and angles).
The hexadecimal representation of numbers will be used occasionally in this course.
Another example for the significance of the parts of a value: for the clock time represented as hh:mm:ss the hh-portion is the most significant part of the clock time value. → The clock time is encoded as big-endian.
Interestingly date values are written in completely different orders depending on the culture/country/locale:
US: MM/DD/YYYY (middle-endian)
Germany: DD.MM.YYYY (little-endian)
ISO-8601: YYYY-MM-DD (big-endian)
The terms big/little-endians stem from the book "Gulliver's Travels" (Jonathan Swift). In the story the folk of Lilliput is required to open boiled eggs on the small end (little-endians), whereas in the rival kingdom of Blefuscu the folk is required to open them on the big end (big-endians). The picture was used by the engineer Danny Cohen to explain the difficulties of byte order.
The war of byte orders of multibyte data has been fought since the 70s, when Intel (x86) and Motorola (68K) presented their products.
The readability of big-endians is relevant for memory dumps.
There are also mixed byte orders (bi-endian) and (rare) completely different byte orders.
Bi-endians (IA-64) can switch the byte order and can have a different byte order in different memory segments.
Some architectures store integral and floating point numbers in different byte orders, or they store floating point values as a big/little-endian mix.
The date format in the US uses middle-endian: MM/DD/YYYY.
C-strings are always stored as big-endians.
Relevance in other areas:
For registers the byte order is not relevant, the rightmost byte is always the LSB.
As the byte order influences the stack layout, certain sorts of bugs show up differently depending on the byte order. We'll discuss this in a future lecture.
If systems having different byte orders exchange data via a network, the byte order conversion (just on the byte copy layer) takes place in the network drivers. The byte order of the network protocol needs to be fixed to make this work: the IP family of protocols uses big-endian, so even little-endian machines need to convert information to big-endian to communicate. On the same machine there is of course no problem.
I/O on files from such different systems is implemented via a compatibility layer.
A byte order mark (BOM, the 2B sequences "FE FF" for big-endian and "FF FE" for little-endian) is used at the beginning of a stream of text encoded as UTF-16 or UTF-32; it allows the receiver to interpret the text correctly.
In which "unit" does the sizeof operator return its result?
In std::size_t; a std::size_t of value 1 represents sizeof(char).
The expression sizeof(a[0]) is ok for non-dynamic arrays, because they can not have zero elements.
Instead of sizeof(a) we could also get the value of sizeof(int[10]) as both values need to be equal (in the latter form we are required to write the argument for sizeof in parentheses, because int[] is a type).
We can also write &a, but its type is then int(*)[10], a pointer to the whole array (with the same address value). The symbol a itself represents the address of the array a, as it decays to a pointer to the first element.
When we pass an array to a function, we are really passing its address (which is also the address of the first element) to the function. - We need to pass the length of an array separately.
In C/C++ the argument of the subscript is not really an index; rather, it is an offset from the address of the array, which is the address of the very first array element.
Pointer arithmetics works because the scalar number being added to or subtracted from a pointer is scaled by the size of the type the pointer operates on.
Pointer arithmetics is an important topic in C++, because it abstracts the functionality of builtin types to be idiomatically compatible to the Standard Template Library (STL).
The type std::ptrdiff_t is defined in <cstddef>.
The way pointer arithmetics works does also explain why a void* pointing to an array created on the heap needs to be cast to a concrete type. - Why?
Because operations with pointer arithmetics need to know the size of the elements to calculate correct offsets; a void* is just a generic pointer to a block of memory. This also explains why a void* cannot be dereferenced.
In the following examples we'll have to use reinterpret_casts (static_casts cannot convert from int* to short* as in this example).
Why are the memory portions modified in the presented order?
Because of the byte order. (We assume little-endian in this case, as the MSB is to the right of the LSB in the depicted memory view.)
We could go really crazy with casting and navigation through memory, the possible combinations are endless. But it is often also pointless and maybe dangerous.
Why do we draw a memory of size 2³²?
If the pointer's size is 4B, the width of the address bus (count of address-"wires") must be 32b; this makes 2³² different bytes addressable.
This memory model:
Keeping data and code in the same memory is an important aspect of the "von Neumann architecture".
The depicted memory model exists since the early 70s and was introduced with the "real" processor mode. Safety was introduced with the protected mode.
The dimensions of the segments are not realistic. The stack is rather small and the heap is rather big.
In assembly languages it is possible to use direct segmentation to define which data resides in the data and in the code segment.
When a program is loaded the start and end address of the heap are passed to the heap manager.
What is the code segment (text segment)?
In this portion of the memory the object code or assembly code resides.
The C/C++ types for which memory is allocated don't matter to the heap manager; it rather manages requests for portions of bytes.
The function std::realloc() may or may not extend the portion of memory (in "higher address direction") that is designated by the passed pointer. This means that the address returned by std::realloc() need not be the same as the one passed.
If a memory block is to be reallocated and cannot be extended because there is not enough adjacent space, another matching memory block will be allocated.
Why does the explained bus error only appear in 50% of the cases?
By a chance of 50% the dereferenced address is odd.
Typically ints reside on addresses being a multiple of four.
Typically there is no address restriction for bytes/chars.
After a crash we may be forced to remove the crashed process or to reboot the system.
C/C++ is a programming language for programmers: there are no guarantees when features are misused. There is no elaborate exception strategy like in .Net or Java. In C++ we can use/consume exceptions, but they are only present in STL APIs; for other APIs we have to cope with undefined behavior.
The teddybear principle: http://talkaboutquality.wordpress.com/2010/08/30/tell-it-to-your-teddy-bear/
This is not exactly what happens, but close enough to understand it basically.
The heap fragmentation evolves like a parking bay w/o marks: Cars of different dimensions enter and leave the bay. After a while, spaces occupied by wide cars will be reused by small cars. The surplus space is wasted leaving a fragmented parking bay.
After a has been freed, the formerly occupied memory could be reused by next allocations.
Not all implementations of the heap manager start finding free memory at the beginning of the heap.
The heap manager can record the available gaps of free memory as a linked list of pointers to them. The pointers are called "free nodes" and the linked list is called the "free list".
A concrete example of a segmented heap is the "Low Fragmentation Heap" (LFH), which can be used in Windows Vista and newer Windows versions. It stores memory blocks of different sizes in dedicated buckets to optimize the search time for free memory and, of course, to lower the heap fragmentation.