Slides of the paper Standoff Annotation for the Ancient Greek and Latin Dependency Treebank by Giuseppe Celano at the 3rd Edition of the DATeCH2019 International Conference
This document describes the embedding of NomLex-BR, a dictionary of Portuguese nominalizations, into OpenWordNet-PT. NomLex-BR relates nominal terms to their corresponding verbs and contains more than 2,539 entries from various sources. The integration aims to facilitate linguistic research and information extraction by connecting deverbal nouns to their verbs. Some issues in OpenWordNet-PT were also identified in the process, such as an incorrect link between the noun "aviltamento" and its verb "aviltar". Future work includes further improvements to coverage and applications to test the resource.
Full version of http://www.slideshare.net/valexiev1/gvp-lodcidocshort. Same is available on http://vladimiralexiev.github.io/pres/20140905-CIDOC-GVP/index.html
CIDOC Congress, Dresden, Germany
2014-09-05: International Terminology Working Group: full version.
2014-09-09: Getty special session: short version (http://VladimirAlexiev.github.io/pres/20140905-CIDOC-GVP/GVP-LOD-CIDOC-short.pdf)
This document provides an update on the LaTeX3 project as of March 2014. It reports that work on the LaTeX3 codebase has slowed as team members have been busy with other tasks. However, some improvements have continued to be made, and the expl3 language is gaining wider use in packages created by external developers to solve typesetting problems. Areas of ongoing work and discussion within the project include uppercase and lowercase text manipulation and optional argument parsing.
This document discusses advanced Perl concepts including finer points of looping, using pack and unpack, working with files and directories, eval, data structures, packages, modules, objects, and interfacing with the operating system. It provides examples and explanations of continue blocks, multiple loop variables, subroutine prototypes, determining calling context, packing and unpacking data, opening, reading and writing files, getting file information, working with directories, using eval, defining arrays of arrays, packages, modules, BEGIN and END blocks, and the basics of defining objects and classes in Perl.
This document summarizes the fourth release of LaTeX2ε. It highlights improvements that make LaTeX installation faster and smaller, including a new concurrent version of docstrip that can write multiple files simultaneously. The release also features updated T1 encoded Computer Modern fonts that fix issues and improve quality. Additional changes include more robust commands and a new interface for building document classes.
Portable TeX Documents (PTD): PackagingCon 2021. Jonathan Fine
Both software and documents have dependencies. This talk focuses on managing document dependencies, to reduce both network and computation latency and to ensure reproducible build (or typesetting) behaviour. Web development has a strong focus on reducing user-experienced latency, as does serverless cloud computing.
This document summarizes the MIT/GNU Scheme reference manual. It describes Scheme as a programming language with static scoping, latent types, and objects with unlimited extent. Procedures are first-class objects that can be passed as arguments and returned as values. Scheme uses prefix notation and parentheses to denote programs and data. The manual outlines notational conventions used for examples and entries describing variables, special forms, and procedures.
This document compares the parallel programming support in Haskell, F#, and Scala by looking at their language features, high-level abstractions for parallelism, and experimental results from implementing the n-body problem on multi-core systems. It finds that all three languages provide good support for parallelism through features like parallel collections, tasks, actors, and strategies/skeletons. Haskell uses the par and pseq primitives as well as evaluation strategies. F# utilizes tasks from the Task Parallel Library and async workflows. Scala supports parallel collections and actors. Experimental results on implementing the n-body problem in the languages show they can all effectively utilize multiple cores.
This document provides an introduction to LaTeX, covering its history and origins, installation, basic document structure, fonts and formatting, tables, figures, lists, references, mathematics typesetting, code listings, bibliography features, TikZ graphics, and useful LaTeX resources. It describes LaTeX as a document markup language released in 1984 as an abbreviation of Lamport TeX, based on the TeX typesetting system developed by Donald Knuth in 1978.
This document provides a sample LaTeX document that conforms to formatting guidelines for ACM SIG proceedings. It includes examples of common elements like sections, equations, tables, figures, citations, and various LaTeX commands. The goal is to demonstrate all possible "bells and whistles" to help authors prepare documents for ACM conferences.
This document provides an overview of Unix shell scripting with ksh/bash. It discusses the goals of the class, which are to learn what problems are suited to shell scripts, review commonly used Unix commands for scripts, and write simple shell scripts. It also lists some assumptions, such as having a basic understanding of commands, navigation, redirection and pipes. The document then provides details on the history of different shells like sh, csh, ksh and bash, and compares their features. It also discusses other scripting languages before focusing on ksh/bash versus sh scripts.
This document provides instructions on how to write and compile LaTeX documents. It discusses the tools needed like pdflatex and how to use them to compile a .tex file into a .pdf file. It also covers how to structure the main document and chapters as separate files and include them. The document concludes with an overview of LaTeX and pdfLaTeX capabilities such as references, fonts, mathematics, and more.
Presentation held in the seminar on "Development Processes in Open Source Projects." Features the documentation tool Sphinx and its internationalization component sphinx-i18n, along with general insights into Open Source communities and technical details about gettext, Docutils, reStructuredText, and Google's Summer of Code. Also fixed lots of bugs in sphinx-i18n. :-)
The document discusses the Delphi Runtime Library (RTL). It provides three key points:
1. The RTL is a collection of functions and procedures built into Delphi that are organized into units like SysUtils, Classes, and FileCtrl.
2. The SysUtils unit is well documented and contains many commonly used routines.
3. The RTL aims to be platform independent through conditional compilation and provides object wrappers for many routines through units like Contnrs.
The document provides an introduction to Bash shell programming in Linux. It covers basic shell commands like pwd, ls, cat, grep, and redirection operators like > and |. It explains how to write shell scripts, set permissions, and include tests and branching. Examples are provided for listing files, examining file contents, sorting output with pipes, and writing a simple "Hello world" shell script. The document is intended as a basic overview of shell programming concepts.
This document provides an introduction and overview of Linux shell scripting. It begins by explaining key concepts like the kernel, shell, processes, redirection and pipes. It then covers variables, writing and running scripts, quotes, arithmetic, arguments, exit status, wildcards, and basic programming commands like echo, if/test, loops, case. The document concludes with more advanced commands like functions, I/O redirection, traps and examples.
This document is a tutorial that introduces Linux shell scripting. It covers topics such as the Linux kernel, shells, processes, redirection, variables, conditional statements, loops, functions, and examples of shell scripts. The tutorial is designed for beginners and explains shell programming concepts through examples to make the ideas clear. It also lists common Linux commands for beginners to become familiar with using the shell.
The document describes the algorithm2e LaTeX package for writing algorithms. It provides environments like algorithm and procedure for defining algorithms, and macros for typesetting different parts of algorithms. Some key features include predefined language keywords, options for customizing appearances, and abilities to number lines or add side comments. The package aims to make algorithm writing in LaTeX easy and customizable.
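As an illustration (this snippet is not taken from the document itself), a minimal algorithm2e example using the ruled and linesnumbered package options might look like this:

```latex
\documentclass{article}
\usepackage[ruled,linesnumbered]{algorithm2e}
\begin{document}
\begin{algorithm}
  \KwIn{An integer $n \geq 0$}
  \KwOut{$n!$}
  $r \leftarrow 1$\;
  \For{$i \leftarrow 2$ \KwTo $n$}{
    $r \leftarrow r \times i$\; \tcp*{running product}
  }
  \Return{$r$}\;
  \caption{Iterative factorial}
\end{algorithm}
\end{document}
```

Each statement ends with \;, keyword macros such as \For and \KwIn supply the predefined language keywords the summary mentions, and \tcp* adds a side comment on the same line.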
This document discusses how to write shared libraries. It begins with a brief history of shared libraries for Linux, noting the limitations of the original a.out format. It then summarizes some of the drawbacks of the a.out approach for shared libraries, before introducing ELF (Executable and Linkable Format) as an improved standard. The document stresses that while ELF removes many restrictions, there are still rules that must be followed to generate decent code from shared libraries and additional techniques required for optimized code.
Slides of the paper Deep Learning-Based Morphological Taggers and Lemmatizers for Annotating Historical Texts by Helmut Schmid at the 3rd Edition of the DATeCH2019 International Conference
This document discusses using text models to improve the accuracy of optical character recognition (OCR) on Chinese rare books. It conducted experiments using n-gram, backward/forward n-gram, and LSTM models on OCR data from ancient medicine books. The backward and forward 4-gram model achieved the highest correction rate at 97.57%. Mixing the LSTM 6-gram model with the OCR's top 5 candidates and the probability of the top candidate further improved accuracy to 97.71%, demonstrating that combining text models with OCR probabilities can better correct OCR errors than text models alone. In conclusion, text models are effective for increasing OCR accuracy on rare books, with the backward/forward 4-gram and LSTM 6-gram models performing best.
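The general idea of combining a character language model with OCR candidate probabilities can be sketched as follows. This is a minimal illustration under assumed data structures (the training corpus, the candidate lists, the vocabulary size, and the interpolation weight `alpha` are all hypothetical), not the experimental setup of the paper:

```python
from collections import Counter

def train_char_ngrams(corpus, n=4):
    """Count character n-grams and their (n-1)-gram contexts from training lines."""
    ngrams, contexts = Counter(), Counter()
    for line in corpus:
        padded = "^" * (n - 1) + line  # pad so the first characters have a context
        for i in range(len(line)):
            ngrams[padded[i:i + n]] += 1
            contexts[padded[i:i + n - 1]] += 1
    return ngrams, contexts

def ngram_prob(ngrams, contexts, context, char, vocab_size=5000):
    """Add-one smoothed estimate of P(char | context)."""
    return (ngrams[context + char] + 1) / (contexts[context] + vocab_size)

def rescore(candidates, context, ngrams, contexts, alpha=0.5):
    """Pick the best character by interpolating OCR confidence with the
    language-model probability.  candidates: list of (char, ocr_prob)."""
    best, best_score = None, -1.0
    for char, ocr_p in candidates:
        lm_p = ngram_prob(ngrams, contexts, context, char)
        score = alpha * ocr_p + (1 - alpha) * lm_p
        if score > best_score:
            best, best_score = char, score
    return best
```

When the OCR engine is unsure (candidates with similar confidence), the language model breaks the tie in favour of the character that is more plausible in context, which is the effect the reported mixing experiments exploit.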
Slides of the paper Turning Digitised Material into a Diachronic Corpus: Metadata Challenges in the Nederlab Project by Katrien Depuydt and Hennie Brugman at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Using lexicography to characterise relations between species mentions in the biodiversity literature by Sandra Young at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Implementation of a Databaseless Web REST API for the Unstructured Texts of Migne's Patrologia Graeca with Searching capabilities and additional Semantic and Syntactic expandability by Evagelos Varthis, Marios Poulos, Ilias Yarenis and Sozon Papavlasopoulos at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Curation Technologies for a Cultural Heritage Archive: Analysing and transforming a heterogeneous data set into an interactive curation workbench by Georg Rehm, Martin Lee, Julián Moreno Schneider and Peter Bourgonje at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Cross-disciplinary collaborations to enrich access to non-Western language material in the Cultural Heritage sector by Tom Derrick and Nora McGregor at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Tribunal Archives as Digital Research Facility (TRIADO): new ways to make archives accessible and useable by Anne Gorter, Edwin Klijn, Rutger Van Koert, Marielle Scherer and Ismee Tames at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Improving OCR of historical newspapers and journals published in Finland by Senka Drobac, Pekka Kauppinen and Krister Lindén at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Towards a generic unsupervised method for transcription of encoded manuscripts by Arnau Baró, Jialuo Chen, Alicia Fornés and Beáta Megyesi at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Towards the Extraction of Statistical Information from Digitised Numerical Tables - The Medical Officer of Health Reports Scoping Study by Christian Clausner, Apostolos Antonacopoulos, Christy Henshaw and Justin Hayes at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Detecting Articles in a Digitized Finnish Historical Newspaper Collection 1771–1929: Early Results Using the PIVAJ Software by Kimmo Kettunen, Teemu Ruokolainen, Erno Liukkonen, Pierrick Tranouez, Daniel Antelme and Thierry Paquet at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper OCR-D: An end-to-end open-source OCR framework for historical documents by Clemens Neudecker, Konstantin Baierer, Maria Federbusch, Kay-Michael Würzner, Matthias Boenig, Elisa Hermann and Volker Hartmann at the 3rd Edition of the DATeCH2019 International Conference
- The document describes a project to fill gaps in knowledge about diamond mining, trading, and polishing in Borneo by developing a workflow using various CLARIAH tools and resources.
- The workflow involved digitizing a diamond encyclopedia, extracting concepts and place names, linking the data to external sources to create linked open data, and querying newspaper archives to build a corpus of relevant articles.
- Promising results showed mining, trading, and polishing continued in Borneo for Southeast Asian customers, and described previously unknown diamond fields and polishing locations in Borneo. The project aims to apply the workflow to other commodities like sugar.
Slides of the paper Automatic Reconstruction of Emperor Itineraries from the Regesta Imperii by Juri Opitz, Leo Born, Vivi Nastase and Yannick Pultar at the 3rd Edition of the DATeCH2019 International Conference
Slides of the paper Automatic Semantic Text Tagging on Historical Lexica by Combining OCR and Typography Classification by Christian Reul, Sebastian Göttel, Uwe Springmann, Christoph Wick, Kay-Michael Würzner and Frank Puppe at the 3rd Edition of the DATeCH2019 International Conference
This document describes the SOS system for segmenting, stemming, and standardizing Arabic text. It presents the challenges of processing Arabic cultural heritage texts which contain orthographic variations. The system uses gradient boosting machines and achieves state-of-the-art performance on segmentation and derives stemming as a byproduct. It also standardizes orthography with high accuracy, which further improves segmentation. The system addresses issues like hamza forms and letter confusions that previous systems did not handle well.
Slides of the paper A-I-PoCoTo - Combining Automated and Interactive OCR PostCorrection by Tobias Englmeier, Florian Fink and Klaus U. Schulz at the 3rd Edition of the DATeCH2019 International Conference
UiPath Test Automation using UiPath Test Suite series, part 6. DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
1. STANDOFF ANNOTATION FOR THE ANCIENT GREEK AND LATIN DEPENDENCY TREEBANK
DATeCH2019, 9.5.2019
Giuseppe G. A. Celano
2. DFG PROJECT
1. Revise: correct the errors
2. Standardize: make the AGLDT standoff as PAULA XML (and convert it into UD)
1. standoff for multiple annotations and/or multiple interpretations of the same token
2. standoff to overcome the problem of conflicting hierarchies
3. Expand: add new annotations
(https://git.informatik.uni-leipzig.de/celano/agldt1)
3. THE AGLDT
▸ Ancient Greek texts: 557,922 tokens
▸ Latin texts: 79,697 tokens
▸ available in GitHub/GitLab:
▸ https://perseusdl.github.io/treebank_data/
▸ https://git.informatik.uni-leipzig.de/celano/agldt1
5. THE PERSEUS TREEBANK (LAST RELEASE, 2.1)
▸ 12 texts
composition date | text | token number
63 BC | Cicero, In Catilinam | 6,652
51 BC | Caesar, De Bello Gallico | 1,556
post 44 BC | Sallust, Bellum Catilinae | 13,191
ca 25 BC | Propertius, Elegiae | 5,297
29-19 BC | Vergil, Aeneid | 2,839
ca 8 AD | Ovid, Metamorphoses | 5,209
14 AD | Augustus, Res Gestae | 3,035
15-50 AD | Phaedrus, Fabulae | 6,588
ca 100 AD? | Petronius, Satyricon | 14,177
ca 100-110 AD | Tacitus, Historiae | 3,531
117-138 AD | Suetonius, Vita Divi Augusti | 8,313
ca 400 AD | Jerome, Vulgata | 9,309
10. INLINE ANNOTATION: DISADVANTAGES
1. the tokenized text becomes the new base text
2. after text extraction from a TEI text, the link to the original text is virtually lost
(e.g., amabam-que and the content of some editorial markup)
3. it is infeasible to connect such base texts to other annotation layers with
different tokenization schemes. For example:
‣ amabamque: one phonetic word
‣ amabam-que: two syntactic words
‣ am-a-ba-m-que: five morphemes
‣ verse vs. sentence
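The conflicting segmentations above can be made concrete with a small sketch (purely illustrative, not the project's actual format): each layer is a list of character offsets into the same base text, so all three coexist without privileging any single tokenization.

```python
# Three segmentations of "amabamque", each expressed as character offsets
# into the same base text. Standoff annotation lets all three coexist;
# an inline tokenized text could encode only one of them.

BASE = "amabamque"

phonetic  = [(0, 9)]                                   # amabamque
syntactic = [(0, 6), (6, 9)]                           # amabam | que
morphemes = [(0, 2), (2, 3), (3, 5), (5, 6), (6, 9)]   # am|a|ba|m|que

for layer in (phonetic, syntactic, morphemes):
    print([BASE[s:e] for s, e in layer])
```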
11. STANDOFF ANNOTATION
1. each annotation layer is attached separately to the original text
(i.e., the base text).
2. an annotation layer references the original text or another
annotation layer which references the original text
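The layering just described can be sketched in Python (an illustrative toy, not the AGLDT's actual PAULA representation): one layer references the base text by character offsets, and a further layer references that layer by token id.

```python
# Standoff layering sketch: the base text is never modified.

BASE = "amabamque"

# Layer 1: syntactic words, referencing the base text via (start, end) offsets
syntactic_words = {
    "w1": (0, 6),   # "amabam"
    "w2": (6, 9),   # "que"
}

# Layer 2: part-of-speech tags, referencing layer 1 by token id
# (which in turn references the base text)
pos = {"w1": "VERB", "w2": "PART"}

for wid, (start, end) in syntactic_words.items():
    print(wid, BASE[start:end], pos[wid])
```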
12. STANDOFF ANNOTATION: PAULA XML
1. Open format based on the principles of LAF (ISO 24612:2012)
2. already employed in a number of historical language corpora
3. the base text is a bare XML text, which is referenced only
via character offsets
13. THE CASE STUDY: CAESAR’S DE BELLO CIVILI
1. the base text is a ‘complex’ TEI XML file
‣ reference is made via XPath expressions coinciding with CTS divisions
(https://git.informatik.uni-leipzig.de/celano/latinnlp/tree/master/case-study)
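Selecting a CTS-like division from a TEI file via XPath can be sketched with the standard library alone; the `div` structure and attribute values below are assumptions for illustration, and the project's actual XPaths may differ.

```python
# Hedged sketch: locating a CTS-style division in a TEI file with an
# XPath-like query (ElementTree supports a subset of XPath).
import xml.etree.ElementTree as ET

TEI = """\
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <div type="textpart" subtype="book" n="1">
      <div type="textpart" subtype="chapter" n="1">Caesar profectus est.</div>
    </div>
  </body></text>
</TEI>"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(TEI)

# Select book 1, chapter 1; the division's text is the base text that
# standoff annotations then reference by character offset.
chapter = root.find(".//tei:div[@subtype='chapter'][@n='1']", ns)
print(chapter.text)
```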
14. TOKENIZATION/WORD SEGMENTATION
▸ Latin: rule-based
▸ select the text to annotate from the TEI XML file
▸ identify abbreviations (word list + regular expressions)
▸ Cn. = Gnaeus
▸ list of not-to-tokenize words (e.g., Antigone, aeque)
▸ tokens ending in -ne/-que/-ve
▸ list of to-tokenize words (e.g., nequis, nobiscum)
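The rule-based scheme above can be sketched as follows; the word lists here are tiny illustrative stand-ins, not the project's actual resources.

```python
# Minimal rule-based Latin enclitic tokenizer sketch: abbreviations pass
# through, a do-not-tokenize list blocks splitting, a to-tokenize list
# forces known splits, and tokens ending in -que/-ne/-ve are split.

ABBREVIATIONS = {"Cn.", "M.", "L."}          # e.g., Cn. = Gnaeus
DO_NOT_TOKENIZE = {"antigone", "aeque", "neque", "quoque"}
FORCE_TOKENIZE = {"nequis": ["ne", "quis"], "nobiscum": ["nobis", "cum"]}
ENCLITICS = ("que", "ne", "ve")

def tokenize(word):
    if word in ABBREVIATIONS:
        return [word]
    low = word.lower()
    if low in FORCE_TOKENIZE:
        return FORCE_TOKENIZE[low]
    if low in DO_NOT_TOKENIZE:
        return [word]
    for enc in ENCLITICS:
        if low.endswith(enc) and len(low) > len(enc):
            return [word[:-len(enc)], word[-len(enc):]]
    return [word]

print(tokenize("amabamque"))  # ['amabam', 'que']
print(tokenize("aeque"))      # ['aeque']
print(tokenize("nobiscum"))   # ['nobis', 'cum']
```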
18. CURRENT CHALLENGES
▸ extraction of text from TEI texts may require different scripts
▸ what is the ideal tokenization/word segmentation?
▸ annotation tools do not support standoff annotation
▸ lack of support for XPointer