Sphinx is a tool that builds documentation from reStructuredText source files, producing HTML, LaTeX, and other output formats. It focuses on hand-written documentation rather than auto-generated API docs. This document covers installing Sphinx and converting documentation from other formats, such as HTML, LaTeX, and Docbook, to Sphinx's reStructuredText.
Source files for this demo are available from the archive at
http://nzpug.org/MeetingsAuckland/November2009
An HTML version is at http://halfbakery.net.nz/sphinx_demo/
10. Linux/Mac
# easy_install sphinx
or
# port install py26-sphinx

Windows
1. Install Python
2. Install easy_install
3. Run: easy_install sphinx
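Not from the original slides: a sketch of bootstrapping a project once Sphinx is installed. sphinx-quickstart, the Makefile it offers to generate, and sphinx-build are standard Sphinx tooling; the paths shown are placeholders.

$ sphinx-quickstart                    # asks for a root path, project name, version, etc.
$ cd <root-path>                       # the directory you gave to sphinx-quickstart
$ make html                            # uses the Makefile sphinx-quickstart generated
$ sphinx-build -b html . _build/html   # equivalent direct invocation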
11. Introduction
============
This is the documentation for the Sphinx documentation builder. Sphinx is a
tool that translates a set of reStructuredText_ source files into various output
formats, automatically producing cross-references, indices etc. That is, if
you have a directory containing a bunch of reST-formatted documents (and
possibly subdirectories of docs in there as well), Sphinx can generate a
nicely-organized arrangement of HTML files (in some other directory) for easy
browsing and navigation. But from the same source, it can also generate a
LaTeX file that you can compile into a PDF version of the documents.
The focus is on hand-written documentation, rather than auto-generated API docs.
There is limited support for that kind of documentation as well (intended to be
freely mixed with hand-written content), but if you need pure API docs, have a
look at `Epydoc <http://epydoc.sf.net/>`_, which also understands reST.
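As an aside (not part of the quoted Sphinx text): a minimal example of what such a source directory and build run can look like. The toctree directive and the -b builder flag are standard Sphinx; the file names here are invented.

docs/
    conf.py       (configuration, generated by sphinx-quickstart)
    index.rst     (master document, shown below)
    install.rst   (an ordinary chapter)

Contents of index.rst::

    Demo Documentation
    ==================

    .. toctree::
       :maxdepth: 2

       install

Build HTML and LaTeX from the same source:

$ sphinx-build -b html docs _build/html
$ sphinx-build -b latex docs _build/latex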
Conversion from other systems
-----------------------------
This section is intended to collect helpful hints for those wanting to migrate
to reStructuredText/Sphinx from other documentation systems.
* Gerard Flanagan has written a script to convert pure HTML to reST; it can be
found at `BitBucket
<http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py>`_.
* For converting the old Python docs to Sphinx, a converter was written which
can be found at `the Python SVN repository
<http://svn.python.org/projects/doctools/converter>`_. It contains generic
code to convert Python-doc-style LaTeX markup to Sphinx reST.
* Marcin Wojdyr has written a script to convert Docbook to reST with Sphinx
markup; it is at `Google Code <http://code.google.com/p/db2rst/>`_.
Prerequisites
-------------
Sphinx needs at least **Python 2.4** to run. If you would like source code
highlighting support, you must also install the Pygments_ library, which you can
do via setuptools' easy_install. Sphinx should work with docutils version 0.4
or a recent (not broken) SVN trunk snapshot.
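A short illustration (added here, not in the original): with Pygments installed, Sphinx colourizes literal blocks. The code-block directive is standard Sphinx; the snippet itself is invented, era-appropriate Python 2.

.. code-block:: python

   def greet(name):
       # Highlighted by Pygments when the HTML output is built
       print "Hello, %s!" % name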