Cron and at are tools for automating jobs and scripts on Linux systems. Cron is used for recurring jobs run on a schedule, while at is used for jobs that need to run only once at a specific time. The cron daemon crond handles cron jobs, while the at daemon atd handles at jobs. Commands like crontab -e and crontab -l are used to edit and view cron job schedules. Examples show how to set up jobs to run backups, reports, scripts, and other tasks on a variety of schedules using cron and at.
Getting started with setting up an embedded platform requires the audience to understand some key aspects of Linux. This presentation deals with the basics of Linux as an OS, Linux commands, the vi editor, and shell features such as redirection, pipes and shell scripting.
Agenda:
In this talk we will present the various locking mechanisms implemented in the Linux kernel, from System V locks to raw spinlocks and the RT patch.
Speaker:
Mark Veltzer - CTO of Hinbit and a senior instructor at John Bryce. Mark is also a member of the Free Software Foundation and contributes to many free software projects.
https://github.com/veltzer
This presentation examines the way files are stored in Linux following the Filesystem Hierarchy Standard. It also addresses the recent proposals by Fedora to change this by merging the bin directories.
Here you can learn all about shell scripting:
1. What is a shell script?
2. Types of shell scripts.
3. Uses of shell scripts.
4. The shell script command line.
5. Examples of shell scripts.
Let's trace Linux Kernel with KGDB @ COSCUP 2021 - Jian-Hong Pan
https://coscup.org/2021/en/session/39M73K
https://www.youtube.com/watch?v=L_Gyvdl_d_k
Engineers have plenty of debugging tools for user-space program development, code tracing, debugging and analysis. But apart from printk, do we have any other debugging tools for Linux kernel development? The KGDB described in the Linux kernel documentation provides another possibility.
This talk will share how to experiment with KGDB in a virtual machine, and then use GDB + OpenOCD + JTAG + Raspberry Pi in a real environment as the demo.
Presentation to Geeks Anonymes Liège by Cyril Soldani, 13 December 2017.
Geeks Anonymes page: https://www.recherche.uliege.be/cms/c_9463913/fr/geeks-anonymes
Agenda:
In this session, Shmulik Ladkani discusses the kernel's net_device abstraction, its interfaces, and how net-devices interact with the network stack. The talk covers many of the software network devices that exist in the Linux kernel, the functionalities they provide and some interesting use cases.
Speaker:
Shmulik Ladkani is a Tech Lead at Ravello Systems.
Shmulik started his career at Jungo (acquired by NDS/Cisco) implementing residential gateway software, focusing on embedded Linux, Linux kernel, networking and hardware/software integration.
51966 coffees and billions of forwarded packets later, with millions of homes running his software, Shmulik left his position as Jungo’s lead architect and joined Ravello Systems (acquired by Oracle) as tech lead, developing a virtual data center as a cloud service. He's now focused on virtualization systems, network virtualization and SDN.
The conversion of the ARM Linux kernel over to the Device Tree as the mechanism to describe the hardware has been a significant change for ARM kernel developers. Nowadays, all developers porting the Linux kernel to new ARM platforms, either new SoCs or new boards, have to work with the Device Tree. Based on practical examples, this talk intends to provide a "getting started guide" for newcomers to the Device Tree world: what is the Device Tree? How is it written and compiled? How do the bootloader and kernel interact? How are Device Tree bindings written and documented? What are the best practices for writing Device Trees and their bindings?
Video available at https://www.youtube.com/watch?v=m_NyYEBxfn8.
I have described everything about the Linux OS, starting from the basics.
I hope this presentation will be really helpful for you.
It was one of the most appreciated presentations when I gave it in my class.
THE LATEST TECHNOLOGY WITH A CONTROL AND SAVINGS SYSTEM! PiD is an intelligent system that keeps the developer's activity constant and thus guarantees maximum performance on CTP plates. Thanks to its continuous analysis of the main parameters, the PiD system determines the optimal dosing value over time.
RRD savings: reduces developer consumption by 65%. RRD is a system created to save developer consumption, working together with the PiD system. Based on reusing the developer that overflows, it increases the efficiency of chemical use.
GRAFONLINE: remote-control support that connects to the processor's software to adjust and analyze its parameters over an internet connection.
As for accessories, the Wasted Developer Processor (WDT) is a device that reduces the amount of chemical solution by 80%, greatly lowering the cost of disposing of waste developer and allowing low-cost operation of the device; in addition, the water separated from the waste developer can be disposed of safely.
The PiD-5000 intelligent developing system is designed to keep the developer consistent in order to achieve high CTP plate processing quality over a longer period of time. The PiD-5000 'Ex' can only operate with GRAFXTRON and is included in the CDN series processor.
DFD (Developer Cleansing Device): provides deep cleaning of the developer, adding efficiency to it and reducing how often the plate processor's filter has to be changed.
- Strong filtering capacity, more than 20 times more efficient than the filter built into the plate processor.
- Does not alter the chemical characteristics of the developer.
- Does not affect the developing temperature.
- Self-cleaning function for the filtering system.
Only suitable for installation with the GRAFXTRON processing system.
BITS: Introduction to Linux - Text manipulation tools for bioinformatics - BITS
This slide is part of the BITS training session: "Introduction to linux for life sciences."
See http://www.bits.vib.be/index.php?option=com_content&view=article&id=17203890%3Abioperl-additional-material&catid=84&Itemid=284
Part 5 of "Introduction to Linux for Bioinformatics": Working the command lin... - Joachim Jacob
This is part 5 of the training "introduction to linux for bioinformatics". Here we introduce more advanced use on the command line (piping, redirecting) and provide you a selection of GNU text mining and analysis tools that assist you tremendously in handling your bioinformatics data. Interested in following this training session? Contact me at http://www.jakonix.be/contact.html
Using a Linux server for an organization infrastructure has many undeniable benefits: stability, security, hardware performance, TCO, freedom.
But that doesn't mean there aren't some drawbacks (usually outweighed by the benefits), that we should consider and try to reduce or remove.
Introducing a Linux server in an organization usually has a pretty significant impact: a new process, a new interface to learn (the CLI), new commands and services, a new way of doing pretty much everything on the infrastructure. Members of the IT team who are inexperienced on Linux will most likely be skeptical of such a new solution, so we have to deal with the resistance to change, which is common and understandable, or we will end up with part of the IT team being left behind by these changes to their work routine.
Improving this situation is mandatory and that’s exactly the mission of the NethServer project: make a Linux distribution for servers more accessible, easier to adopt and simpler to understand, thanks to a powerful and extensible web interface that simplifies common administration tasks.
Because we believe simplicity can still be powerful.
If you're looking for the top 100 Linux interview questions and answers, then you've come to the right place. We at hirist have compiled a list of the top Linux interview questions asked by companies like TCS, Infosys, Wipro, HCL and Cognizant, and put it together in a pdf format that can be downloaded for free.
You can easily download this free Linux interview questions pdf file and use it to prepare for an interview. It doesn't matter whether you're looking for Linux interview questions and answers for freshers or for experienced candidates, because this presentation caters to both segments.
This list includes Linux interview questions and answers in the below categories:
top 100 linux interview questions
kickstart linux interview questions
interview questions on linux boot process
top 100 linux interview questions answers
linux interview questions 2009
linux installation interview questions
interview question on linux commands
linux interview topics
top 50 linux interview questions
Top 30 linux system admin interview questions & answers
Top 25 Unix interview questions with answers
Linux Interview Questions
Practical Interview Questions and Answers on Linux
Top 100 Informatica Interview Questions
10 Linux and UNIX Interview Questions and Answers
linux interview questions and answers for freshers
linux interview questions and answers pdf
linux interview questions and answers pdf free download
linux interview questions and answers for experienced pdf
linux l2 interview questions and answers
linux system administrator interview questions and answers
basic linux interview questions and answers
red hat linux interview questions and answers
This presentation is an introduction to the IT automation environment, starting from a sysadmin's point of view.
The purpose of these tools is to help with troubleshooting and managing a heterogeneous IT environment to ensure availability and reliability.
allscripts.pdf
-----schedule.sh------
#!/bin/bash
#ssh into node
sudo ssh [email protected]
#run the node setup for a specific node at midnight on the 1st day of every month
#(add this entry with crontab -e; cron stores it under /var/spool/cron/crontabs)
0 0 1 * * /home/cit481/node_setup.sh
#run a backup of the Experiments directory of the node at 9 pm every day and
#save the cron job to the crontab with crontab -e
0 21 * * * /home/cit481/backup.sh
exit
#to run this script in the host machine command line write ./schedule.sh
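The cron lines in schedule.sh would never be registered with cron just by sitting inside a script; a hedged sketch of installing one of them non-interactively (the crontab command's availability and the script path are assumptions carried over from the scripts above):

```shell
# Append the 9 pm backup entry to a copy of the current crontab.
entry='0 21 * * * /home/cit481/backup.sh'
{ command -v crontab >/dev/null && crontab -l 2>/dev/null; echo "$entry"; } > /tmp/newcron
# crontab /tmp/newcron    # would install the combined file as the new crontab
grep -F "$entry" /tmp/newcron
```

Listing the old crontab first keeps any existing entries; `crontab file` replaces the whole table, so the new file must contain everything you want kept.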
-----local_setup.sh-------
#!/bin/bash
#This is the local setup script that updates your system and installs the ssh server
#update system
yum update -y yum
yum update -y
#ssh server
yum install -y openssh-clients
yum install -y openssh-server
exit
-----cleanup.sh------
#!/bin/bash
#back up the Experiments directory on the remote machine to the home
#directory on the local host
rsync -a [email protected]:/cit481/Experiments/ cit480-[email protected]:/home
#if the Experiments directory exists then remove it; if not, report it
while read -r Experiments
do
    if [ -d "$Experiments" ]
    then
        rm -R "$Experiments"
        echo "Directory Experiments found and deleted."
    else
        echo "Directory Experiments not found."
    fi
done < dir_list
exit
-------backup.sh--------
#!/bin/bash
#backup file from node to the local host
#adding a time stamp to the backup file
#TIME=$(date +%b-%d-%y)
#FILENAME=backup_log_$TIME
scp -r /home/cit481/Experiments [email protected]:/home/cit480-4/Desktop
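The commented-out timestamp lines in backup.sh have two small bugs: `date` takes its format as a separate `+`-prefixed argument, and the day of month is `%d`, not `$d`. A minimal sketch of the intended idea:

```shell
# Build a timestamped name for the backup log (format taken from the comment above)
TIME=$(date +%b-%d-%y)
FILENAME="backup_log_$TIME"
echo "$FILENAME"
```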
-----dir_list----
/cit481/Experiments
-------node_setup.sh----------
#!/bin/bash
#This is the node setup script that updates your system and creates a
#fresh install of all the packages
#ssh to the machine
ssh [email protected]
#update the system
yum update -y yum
yum update -y
#install all packages needed
#compiler
yum install -y gcc
#version control
yum install -y git
#text editor
yum install -y vim
#unit testing
yum install -y gtest
#command interpreter
yum install -y zsh
yum install -y ping
yum install -y traceroute
yum install -y tcpdump
yum install -y mysql
yum install -y ftp
yum install -y gzip
yum install -y man
yum install -y less
yum install -y make
yum install -y rpm-build
yum install -y iperf python
yum install -y nc
#ssh server
yum install -y openssh-clients
yum install -y openssh-server
exit
---myprogram.sh----
#!/bin/bash
#the experiment that I will be using for this project
echo "Hello World!"
Building a DSL with GraalVM (VoxxedDays Luxembourg) - Maarten Mulders
GraalVM is a virtual machine that can run many languages on top of the Java Virtual Machine. It comes with support for JavaScript, Ruby, Python… But what if you're building a DSL, or your language is not listed? Fear not!
In this session we'll discover what it takes to run another language in GraalVM. Using GraalVM, we don't only get a fast runtime, but we'll also get great tool support. With Brainfuck as an example, we'll see how we can run guest languages inside Java applications. It might not bring us profit, but at least it will bring some fun.
Docker is the next best thing in deployment and infrastructure management. This talk will go over a brief introduction of the Docker objects and how they interact.
How to make a large C++ code base manageable - corehard_by
My talk will cover how to work with a large C++ code base professionally: how to write code for debuggability, how to work effectively even with the long C++ compilation times, how and why to utilize the STL algorithms, and how and why to keep interfaces clean. In addition, general convenience methods like making wrappers to make the code less error prone (for example ranged integers, listeners, concurrent values). Also a little bit about common architecture patterns to avoid (virtual classes), patterns to encourage (pure functions), and how std::function/lambda functions can be used to make virtual classes copyable.
Fine-tuning your development environment means more than just getting your editor set up just so -- it means finding and setting up a variety of tools to take care of the mundane housekeeping chores that you have to do -- so you have more time to program, of course! I'll share the benefits of a number of yak shaving expeditions, including using App::GitGot to batch manage _all_ your git repos, App::MiseEnPlace to automate getting things _just_ so in your working environment, and a few others as time allows.
Delivered at OpenWest 2016, 13 July 2016
Paper Presentation - "Your Botnet is my Botnet: Analysis of a Botnet Takeover" - Jishnu Pradeep
Presentation based on Paper titled: "Your botnet is my botnet: Analysis of a botnet takeover". The original authors are Brett Stone-Gross, Marco Cova, Lorenzo Cavallaro, Bob Gilbert, Martin Szydlowski,
Richard Kemmerer, Christopher Kruegel, and Giovanni Vigna.
Cloud computing is rapidly emerging due to the provisioning of elastic, flexible, and on demand storage and computing services for customers. The data is usually encrypted before storing to the cloud. The access control, key management, encryption, and decryption processes are handled by the customers to ensure data security. A single key shared between all group members will result in the access of past data to a newly joining member. The aforesaid situation violates the confidentiality and the principle of least privilege.
Democratizing Fuzzing at Scale by Abhishek Arya - abh.arya
Presented at NUS: Fuzzing and Software Security Summer School 2024
This keynote talks about the democratization of fuzzing at scale, highlighting the collaboration between open source communities, academia, and industry to advance the field of fuzzing. It delves into the history of fuzzing, the development of scalable fuzzing platforms, and the empowerment of community-driven research. The talk will further discuss recent advancements leveraging AI/ML and offer insights into the future evolution of the fuzzing landscape.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf - fxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSE - DuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
Vaccine management system project report documentation.pdf - Kamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities once they have been distributed from the national stores. With the introduction of new vaccines, more challenges are anticipated, with these additions posing a serious threat to the already overstrained vaccine supply chain system in Kenya.
Quality defects in TMT Bars, Possible causes and Potential Solutions - PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Final project report on grocery store management system.pdf - Kamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner, especially if your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
2. Cron is a daemon called "crond" used to schedule and execute jobs
or scripts automatically without user intervention. Cron, also
referred to as crontab, can help with automated log rotation,
scheduled reporting and running scripts at off times of the day.
Cron is primarily used for jobs that need to be executed over and
over, like rotating logs every week or sending a report email every
morning.
An additional tool you can use is called "at"; it is used to execute
a job only once. "at" is very useful if, for example, you want to run
a backup job starting at 8pm and you expect to leave at 5:30pm.
In OpenBSD and FreeBSD the cron daemon can handle cron jobs as
well as "at" jobs. In Linux the "crond" daemon is used for cron jobs
only and a separate daemon, "atd", is used for at jobs. Make sure the
correct daemon is running for the job scheduler you are looking to
use.
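That last check can be scripted; a small sketch (assuming `pgrep` is available) that reports which scheduler daemons are running:

```shell
# On Linux, crond serves cron jobs and atd serves at jobs;
# on OpenBSD/FreeBSD a single cron daemon serves both.
for d in crond cron atd; do
    if pgrep -x "$d" >/dev/null 2>&1; then
        echo "$d: running"
    else
        echo "$d: not running"
    fi
done
```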
3. To use crontab there are two important commands:
crontab -e   edit your crontab entries
crontab -l   print the entries from your crontab
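Besides editing interactively with crontab -e, a whole file can be installed in one step; a sketch with an assumed file name and the backup job path used elsewhere in this document:

```shell
# Write a one-line crontab to a file; installing and listing are shown commented out.
cat > /tmp/mycrontab <<'EOF'
# m h dom mon dow  command
0 21 * * * /home/cit481/backup.sh
EOF
# crontab /tmp/mycrontab   # replace the current user's crontab with this file
# crontab -l               # print the installed entries
grep -c '^[0-9]' /tmp/mycrontab
```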
4. Here is an example of a very easy-to-reference header for your
crontab. It has a description of every time slot and what each
slot will accept. This example also specifies the shell and the path,
making sure the binaries you run can be found. The last line is an
example of running "newsyslog" Sunday at midnight. You are welcome
to cut and paste this block to the top of your crontab.
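The header block itself did not survive in this transcript; one possible reconstruction matching the description above (the PATH value is an assumption):

```
SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# minute (0-59)
# |  hour (0-23)
# |  |  day of month (1-31)
# |  |  |  month (1-12, or Jan-Dec)
# |  |  |  |  day of week (0-7, 0 or 7 is Sunday, or Sun-Sat)
# |  |  |  |  |  command
00   0  *  *  0  /usr/bin/newsyslog
```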
5. Let's take a look at some examples, in order from simple to a little
more complex. Notice all of the binaries are using their absolute paths.
Cron uses its own PATH variable and it is a safe practice to always use
absolute paths in your crontab. This avoids confusion.
Rotate logs weekly at 12 midnight (just like the example above):
00 0 * * 0 /usr/bin/newsyslog
Rotate logs weekly at 12 midnight (instead of 0 for the day of the week
we can use Sun for Sunday):
00 0 * * Sun /usr/bin/newsyslog
Mail a report to root every day at 11:59pm (23:59):
59 23 * * * /usr/local/bin/pflogsumm -d today /var/log/maillog | mail -s "mail report" root
6. Run the backup script at 5am on the 3rd (Wed) and 5th (Fri) days of the week,
sending any output and errors to /dev/null:
00 5 * * 3,5 /tools/BACKUP_script.sh >> /dev/null 2>&1
Compress backup files at 6am on the 1st and 15th of the month:
00 6 1,15 * * /tools/BACKUP_compress.sh
Refresh the Squid ad blocker server list at 12:05am every Sunday, Wednesday
and Saturday (*/3 in the day-of-week field):
05 0 * * */3 /tools/ad_servers_newlist.sh
Clear the blocked hosts list at 3:23pm (15:23) every Monday, every other
month (*/2 in the month field):
23 15 * */2 1 /tools/clear_blocked_hosts.sh
7. Run a script at 8:45pm (20:45) on the 2nd and the 16th, only in the months
of January and April:
45 20 2,16 1,4 * /tools/a_script.sh
Run a script every day at 8:45pm (20:45), adding a random sleep time
between 0 and 300 seconds first:
45 20 * * * sleep $(($RANDOM % 300)); /tools/a_script.sh
Run the script at 23:59 (11:59pm) on the last day of the month:
59 23 28-31 * * [ $(date -d +1day +%d) -eq 1 ] && /tools/a_script.sh
8. To run a job only once it is easier to use "at" than to set up a cron job
and then go back and remove it once the job has run. Remember, you need to
have the "atd" daemon running on Linux systems to run "at" jobs. On OpenBSD
or FreeBSD systems the cron daemon will handle "cron" and "at" jobs.
To run an "at" job you first need to tell "at" what time to run the job.
Remember to use absolute paths to avoid confusion. Once you execute "at" with
the time and date you will be put into an "at" shell. This is where you
enter the commands you want to execute, one command per line to keep it
simple.
In this example we will execute a set of commands at 5am on January
23rd. The backup script will run and then we will send out mail to root. To
close the "at" shell and save the job you must type Ctrl-d (the control key
with the lowercase d).
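Put together, the session described above might look like the following transcript (illustrative only; the backup script path is taken from the earlier cron examples, and job output is omitted):

```
$ at 5am Jan 23            # start an at job for 5am on January 23rd
at> /tools/BACKUP_script.sh
at> echo "backup finished" | mail -s "backup report" root
at> <Ctrl-d>               # closes the at shell and saves the job
$ atq                      # list the queued at jobs
$ atrm <job-id>            # remove a queued job if it is no longer needed
```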