The document is a presentation on the make tool. It explains that make is used to control the generation of executables and other non-source files from a program's source files. It describes how make works from a Makefile that defines rules and dependencies between targets and prerequisites. The presentation covers writing explicit and implicit rules, using variables, automatic variables, and make directives in the Makefile, and it summarizes common make command-line options.
Make is a tool that automates the building of software by tracking dependencies between files and only rebuilding components that have changed. It reads build instructions from a makefile to determine what needs to be built. Make traverses the dependency tree of a project, rebuilds out of date or missing components, and handles dependencies between files and components. While powerful, makefiles can be difficult to write and debug, and Make has limitations for languages like Java that don't expose dependencies in source code. Alternatives like Apache Ant provide similar functionality through XML build files.
The document provides an overview of the Spring framework and instructions for setting up a simple Spring example project in Eclipse. It introduces the core concepts of Spring including dependency injection and inversion of control. It then demonstrates a simple example where two shape classes (Rectangle and Circle) implement a Shape interface and are configured as Spring beans in an XML file. A driver class loads the configuration file and retrieves the shape objects by name to loosely couple the code from the specific shape implementations.
An Introduction to Makefile.
About 23 slides giving you a quick start to the make utility, its usage, and its working principles, with tips and examples to help you understand and write your own Makefiles.
In this presentation you will learn why this utility continues to hold its top position among project build tools, despite many younger competitors.
Visit the Do You Know Magazine page: https://www.facebook.com/douknowmagazine
Terraform is an open source infrastructure as code tool that can be used to build, change, and version infrastructure safely and efficiently. The presentation provided an overview of Terraform, including its architecture, workflow, modules, providers, debugging options, and common gotchas. It also discussed challenges and opportunities for improving Terraform, as well as tools that can be used to automate, test, and visualize infrastructure defined with Terraform.
Terraform is an open source infrastructure as code tool that can be used to build, change, and version infrastructure safely and efficiently. The presentation provided an overview of Terraform, including its architecture, workflow, modules, providers, debugging options, and common gotchas. It also discussed challenges Terraform faces and how the community can help address them through contributing modules, providers and helping improve existing ones.
The document provides an overview of the architecture of the gLite data management system. It discusses the challenges of data management in grids, including heterogeneity, distribution and data description. It defines key concepts like the Storage Element, File Catalogue and Storage Resource Manager interface. It also covers file naming conventions, the different types of Storage Elements, and commands to interact with the file catalogue and manage data replication.
Brief training targeted at middle-school-aged students who are participating in First Lego League robotics and planning to use a version control tool such as EV3Hub.
The document discusses GNU/Linux operating system concepts including the GNU operating system, GNU development tools like GCC and binutils, and the Linux process model. It covers the stages of building a C program from preprocessing to linking, using Makefiles, and basic process management in Linux using commands like ps and top. It also describes process creation using fork, exec, and the system function in C programming.
This document provides an introduction to version control systems. It discusses some of the key challenges in sharing files among software developers working on the same project. Early approaches involved locking files to prevent simultaneous editing, but this limited parallel work. Modern version control systems like SVN and Git allow concurrent editing by having developers make local changes and later merging those changes. Distributed version control systems store the full revision history locally, allowing work without a central server. The document outlines some common version control processes and terminology.
The document provides an introduction to Java and XML. It outlines the course objectives which are to introduce Java architecture, syntax, object-oriented concepts, exception handling, and packaging Java applications. It also aims to introduce XML and XML parsing. The session plan for day 1 includes a review of object-oriented concepts and an introduction to the Java architecture, basic constructs in Java, classes, objects, and features of object-oriented programming in Java.
This document provides an overview and introduction to Linux basics including:
- Linux origins tracing back to Unix and key contributors like Linus Torvalds and Richard Stallman.
- Linux architecture with the kernel at the core and layers including shell, libraries, and applications.
- Linux file system structure with important directories like /, /bin, /etc, and file types.
- Common Linux commands for file management, permissions, users, and processes.
- File system concepts like permissions denoted by rwx and file/folder management commands.
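The rwx permission model mentioned in the list above can be exercised directly with chmod. A small sketch (the script name is invented):

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho hello\n' > script.sh

ls -l script.sh        # typically -rw-r--r--: no execute bit yet
chmod u+x script.sh    # grant execute permission to the owner (u)
ls -l script.sh        # now -rwxr--r--
./script.sh            # the file can now be run as a program
```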
The document discusses improving the cold startup performance of OpenOffice.org 3.2. It summarizes the improvements made in OO3.2 including reducing the number of libraries loaded at startup by merging libraries. It then discusses the PE file format and how merging two libraries reduces the number of files opened and amount of data read from disk. Test results showed merging 65 libraries into 10 in OO1.1 reduced the number of libraries loaded at startup from 68 to 24, improving the cold startup time.
This document discusses how to write shared libraries. It begins with a brief history of shared libraries, noting that they allow code to be reused across processes by loading it into memory once. It then discusses some of the challenges with early binary formats not being designed for shared libraries, and how Linux initially used a.out but later switched to ELF to address limitations. The document will cover rules for properly using shared libraries to optimize resource usage and structure programs.
The document provides an overview of Hadoop distributed computing. It discusses how Hadoop uses MapReduce and HDFS to efficiently process large amounts of data across clusters of commodity servers. Key features of Hadoop include scaling to large datasets, handling failures automatically, and providing a simple programming model through MapReduce.
The document provides an overview of the CSCE 510 - Systems Programming course, including a brief history of systems programming and Unix, the course content which involves programming assignments in C like ls and shell programming, and references for further reading. It discusses the kernel and its tasks like process scheduling. It also summarizes file types, pathnames, the directory hierarchy, and basic Unix commands.
This document summarizes a presentation on helping Windows administrators survive using OS X. It discusses key differences between OS X and Windows like the file structure, permissions, preferences, security features like Gatekeeper and FileVault, and the launchd process management system. It provides examples of commands, shortcuts, and navigating the OS X interface. The presentation concludes with a discussion on differences in managing Macs with KACE and questions about Netbooting and software distribution.
Robocopy is a command line tool used for file replication and maintaining identical copies of directory structures. It can copy a single directory or recursively copy subdirectories. Robocopy classifies files as existing in the source, destination, or both locations, and further classifies files based on comparing timestamps and sizes. It allows specifying options to include, exclude, delete, and selectively copy files and directories between source and destination locations.
This document provides an overview of why GNU/Linux is useful, where it is used, the different distributions, basics of the operating system like shell, directory structure, logging in, and commands. Some key benefits of GNU/Linux mentioned are that software is free, it enables advanced multitasking and networking, is multiuser, and provides access to programming languages and open source projects. Common distributions include Red Hat Linux, Debian, and SUSE. The document then covers basics like shell, directory structure, logging in, and demonstrates many common commands like ls, cat, cp, rm, mv, and their usage.
The document provides an overview of Arnaud Bouchez and his work on mORMot and SynPDF. It discusses mORMot version 1.18 and its features like being an ORM, supporting SOA, MVC, and REST. It then summarizes the results of a survey conducted on refactoring mORMot, including separating it into smaller units, using semantic versioning, dropping old compiler support, and moving to GitHub. It previews the structure and goals of the new mORMot 2 library.
Embedded Systems: Lecture 13: Introduction to GNU Toolchain (Build Tools), by Ahmed El-Arabawy
The document discusses Linux toolchains used for embedded systems development. It describes the main components of the GNU toolchain including gcc (compiler), ld (linker), ar (library archiver) and other tools. It explains the compilation process from source code to executable, use of static and dynamic libraries, and how the dynamic linker locates libraries at runtime. Commands for building, linking and debugging programs are also covered.
101 4.6 Create and change hard and symbolic links v2, by Acácio Oliveira
This document discusses hard and symbolic links in Linux file systems. It defines hard links as additional references to an inode that have the same permissions and access times as the original file. Symbolic links maintain separate permissions and if the original file is deleted, the link is broken. The document provides examples of creating hard and symbolic links using the ln command and explains how to identify them when listing a directory. It also covers that hard links must be on the same file system while symbolic links can span file systems.
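The inode behaviour described above can be demonstrated in a few commands (file names invented):

```shell
cd "$(mktemp -d)"
echo hello > original.txt

ln original.txt hardlink.txt    # hard link: another name for the same inode
ln -s original.txt symlink.txt  # symbolic link: stores the path as text

ls -li                          # hardlink.txt shows the same inode number
rm original.txt
cat hardlink.txt                # still prints hello: the inode survives while any hard link remains
cat symlink.txt 2>/dev/null || echo "dangling symlink"
```

After the `rm`, the hard link keeps the data alive, while the symbolic link dangles because the path it stores no longer exists.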
Build an application upon Semantic Web models. Brief overview of Apache Jena and OWL-API.
Semantic Web course
e-Lite group (https://elite.polito.it)
Politecnico di Torino, 2017
Introduction to Linux at the Introductory Bioinformatics Workshop, by Setor Amuzu
This is a brief introduction to Linux, with emphasis on command-line interface. This presentation was made to participants of the H3ABioNet Introductory Bioinformatics workshop held in Accra, Ghana on 26 March, 2014.
4.6 Create and change hard and symbolic links v2, by Acácio Oliveira
This document discusses hard and symbolic links in Linux file systems. It defines hard links as additional references to an inode that maintain the same permissions and access times as the original file. Symbolic links store the file path separately and can span file systems. The document provides instructions for creating hard and symbolic links using the ln command and explains how to identify them by listing directory contents. It also describes how multiple links can reference a single file and what happens when files or links are deleted.
Writing documentation may be the most hated task a developer can imagine, but on the other hand nothing compares with well-documented source code when you want to change or extend it. phpDocumentor is one of many tools that parse inline documentation and generate well-structured, cross-referenced documents. This talk shows you how to get the most out of phpDocumentor and should enable you to write fantastic documentation.
This document provides information about managing shared libraries in Linux. It discusses:
- Shared libraries allow common code to be reused across applications to reduce duplication. Applications may link dynamically or statically to libraries.
- Linux systems store shared libraries in paths like /lib and /usr/lib. Libraries have a naming convention like libname-version.so and libname.so symlinks.
- The ldd command shows which libraries an application requires. Libraries can also depend on other libraries.
- The ldconfig command processes the /etc/ld.so.conf file to create the ld.so.cache file, which records library locations.
- The LD_LIBRARY_PATH variable can set non-standard library paths.
This document discusses C language files input/output (I/O), the preprocessor, and conditional compilation. It covers:
- Types of files for I/O: text and binary files. Text files store plain text while binary files store data in 0s and 1s.
- File operations in C: creating, opening, closing files and reading/writing data. Functions like fopen(), fclose(), fprintf(), fscanf(), fread(), fwrite() are used.
- The preprocessor allows inclusion of header files and definition of macros to transform code before compilation. Directives like #include, #define are used.
- Conditional compilation allows certain code blocks to be included or excluded at compile time, using directives like #if, #ifdef, and #endif.
The Microsoft 365 Migration Tutorial For Beginner.pptx, by operationspcvita
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365, outlines common Office 365 migration scenarios, and explains how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
1. Make Tutorial
Le Yan
User Services
High Performance Computing @ LSU/LONI
2/14/2012
LONI Fortran Programming Workshop, LSU
Feb 13-16, 2012
2. Outline
• What is make
• How to use make
– How to write a makefile
– How to use the “make” command
3. What is Make
• A tool that
– Controls the generation of executable and other non-source files (libraries
etc.)
– Simplifies (a lot) the management of a program that has multiple source files
• Has many variants
– GNU make (we will focus on it today)
– BSD make
– …
• Other utilities that do similar things
– CMake
– Zmake
– …
5. Why have multiple source files
• It is very important to keep different modules of functionality in separate source files, especially for a large program
– Easier to edit and understand
– Easier version control
– Easier to share code with others
– Allows writing a program in different languages
6. From source files to executable
• Two-step process
– The compiler generates the object files from the
source files
– The linker generates the executable from the
object files
• Most compilers do both steps by default
– Use “-c” to suppress linking
7. Compiling multiple source files
• Compiling single source file is straightforward
– <compiler> <flags> <source file>
• Compiling multiple source files
– Need to analyze file dependencies to decide the
order of compilation
– Can be done with one command as well
• <compiler> <flags> <source file 1> <source file 2>…
8. A “Hello world” example (1)
Source file  Purpose
common.f90   Declares a character variable to store the message
hello.f90    Prints the message to screen
adjust.f90   Modifies the message and prints it to screen
main.f90     Calls functions in hello.f90 and adjust.f90

Dependency graph: common.f90 compiles to common.mod; main.f90, adjust.f90 and hello.f90 compile to main.o, adjust.o and hello.o (adjust.o and hello.o depend on common.mod); the three object files link into a.out.
10. Command line compilation
• Command line compilation works, but it is
– Cumbersome
• Does not work very well when one has a source tree with
many source files in many sub-directories
– Not flexible
• What if different source files need to be compiled using
different flags?
• Use Make instead!
11. How Make works
• Two parts
– The Makefile
• A text file that describes the dependency
– The “make” command
• Compile the program using the dependency provided
by the Makefile
[lyan1@eric2 make]$ ls
adjust.f90 common.f90 hello.f90 main.f90
Makefile
[lyan1@eric2 make]$ make
ifort common.f90 hello.f90 adjust.f90 main.f90
[lyan1@eric2 make]$ ls
adjust.f90 a.out common.f90 common.mod hello.f90
main.f90 Makefile
12. A Makefile with only one rule
[lyan1@eric2 make]$ cat Makefile
all:
	ifort common.f90 hello.f90 adjust.f90 main.f90

This is an explicit rule: "all" is the target, and the indented line is the action (shell commands that will be executed). The indentation before the action must be a tab.
13. Exercise 1
• Copy all files under
/home/lyan1/traininglab/make to your own
user space
• Check the Makefile and use it to build the
executable
14. Makefile components
• Explicit rules
– Purpose: create a target, or re-create it when any of
its prerequisites changes
– Syntax:
• Implicit rules
• Variable definition
• Directives
target: prerequisites
(tab) action
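As a concrete sketch of this syntax (file names taken from the earlier "Hello world" example):

```makefile
# hello.o is the target; hello.f90 and common.mod are the prerequisites.
# The indented line (a real tab character) is the action.
hello.o: hello.f90 common.mod
	ifort -c hello.f90
```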
15. Explicit rules (1)
• Multiple rules can exist in the same Makefile
– The “make” command builds the first target by default
– To build other targets, one needs to specify the target name
• make <target name>
• A single rule can have multiple targets separated by space
• An action (or recipe) can consist of multiple commands
– They can be on multiple lines, or on the same line separated by
semicolons
– Wildcards can be used
– By default all executed commands will be printed to screen
• Can be suppressed by adding “@” before the commands
16. Explicit rules (2)
• How file dependencies are handled
– Targets and prerequisites are often file names
– A target is considered out-of-date if
• It does not exist, or
• It is older than any of the prerequisites
18. Exercise 2
• Write a Makefile using the template provided on
the previous slide and “make”
• Run “make” again and see what happens
• Modify the message (common.f90) and “make”
again
• Add a new rule “clean” which deletes all but the
source and makefiles (the executable, object files
and common.mod), and try “make clean”
19. Variables in Makefile (1)
• These kinds of duplication are error-prone
• One can solve this problem by using variables
all: main.o adjust.o hello.o
	ifort main.o adjust.o hello.o
main.o: main.f90
	ifort -c main.f90
adjust.o: adjust.f90 common.mod
	ifort -c adjust.f90
hello.o: hello.f90 common.mod
	ifort -c hello.f90
common.mod: common.f90
	ifort -c common.f90
20. Variables in Makefile (2)
• Similar to shell variables
– Define once as a string and reuse later
Without variables:

all: main.o adjust.o hello.o
	ifort main.o adjust.o hello.o
main.o: main.f90
	ifort -c main.f90

With variables:

FC=ifort
OBJ=main.o adjust.o hello.o
all: $(OBJ)
	$(FC) $(OBJ)
main.o: main.f90
	$(FC) -c main.f90
21. Automatic variables
• The values of automatic variables change every time a
rule is executed
• Automatic variables only have values within a rule
• Most frequently used ones
– $@: The name of the current target
– $^: The names of all the prerequisites
– $?: The names of all the prerequisites that are newer than
the target
– $<: The name of the first prerequisite
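The earlier link and compile rules can be sketched with automatic variables; the file names are assumed from the "Hello world" example:

```makefile
# $@ expands to the target name "a.out";
# $^ expands to all prerequisites: "main.o adjust.o hello.o"
a.out: main.o adjust.o hello.o
	ifort -o $@ $^

# $< expands to the first prerequisite, "main.f90"
main.o: main.f90
	ifort -c $<
```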
22. Implicit rules (1)
• Tells the make system how to build a certain type of target
– GNU make has a few built-in implicit rules
• Syntax is similar to an ordinary rule, except that “%” is used
in the target
– “%” stands for the same thing in the prerequisites as it does in
the target
– There can also be unvarying prerequisites
– Automatic variables can be used here as well
%.o: %.c
(tab) action
23. Implicit rules (2)
• In this example, any .o target has a corresponding .c file
as an implied prerequisite
• If a target needs additional prerequisites, write an action-less rule with those prerequisites
CC=icc
CFLAGS=-O3
%.o : %.c
	@$(CC) $(CFLAGS) -c -o $@ $<
data.o: data.h
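A Fortran analog of the C rule above, as one might write it for the workshop's .f90 files (the FC and FFLAGS values are assumptions):

```makefile
FC=ifort
FFLAGS=-O2

# Any .o target has a matching .f90 file as an implied prerequisite.
%.o: %.f90
	$(FC) $(FFLAGS) -c -o $@ $<

# adjust.o additionally depends on common.mod (an action-less rule).
adjust.o: common.mod
```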
24. Exercise 3
• Rewrite the Makefile from Exercise 2
– Define an implicit rule so that no more than 3
explicit rules are necessary (excluding “clean”)
– Use variables so that no file name appears in the
action section of any rule
25. Directives
• Make directives are similar to the C preprocessor
directives
– E.g. include, define, conditionals
• Include directive
– Read the contents of other Makefiles before
proceeding within the current one
– Often used to read top-level and common definitions when
there are multiple sub-directories and makefiles
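A minimal sketch of the include directive; the file name common.mk and its contents are assumptions, not from the slides:

```makefile
# common.mk might define FC, FFLAGS and OBJ, shared by the makefiles
# in several sub-directories.
include common.mk

all: $(OBJ)
	$(FC) $(OBJ)
```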
26. Command line options of make (1)
• -f <file name>
– Specify the name of the file to be used as the makefile
– Default is GNUmakefile, makefile and Makefile (in that
order)
– Multiple makefiles may be useful for compilation on
multiple platforms
• -s
– Turn on silent mode (as if all commands start with an “@”)
27. Command line options of make (2)
• -j <number of jobs>
– Build multiple targets in parallel
• -i
– Ignore all errors
– A warning message will be printed out for each error
• -k
– Continue as much as possible after an error.
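A minimal sketch of these options in action, using a hypothetical throwaway Makefile whose recipes only echo, so it runs without any compiler installed:

```shell
# Hypothetical demo project (not from the workshop files).
mkdir -p /tmp/make-demo
cd /tmp/make-demo
printf 'all: part1 part2\n\npart1:\n\t@echo building part1\n\npart2:\n\t@echo building part2\n' > Makefile

make               # builds the first target ("all"), i.e. both parts
make part2         # builds only the named target
make -s part2      # -s: silent mode (the @-prefixed recipes are already quiet)
make -j 2          # -j: run independent recipes in parallel
```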
28. Exercise 4
• Take a look at a real life makefile
– /home/lyan1/traininglab/valgrind/Makefile
– A makefile for the memory profiler Valgrind