This document discusses basic file management commands in Linux. It covers commands like cp, mv, rm, mkdir and touch to copy, move and remove files, create directories, and update file timestamps. It provides examples of using each command, explaining how to copy and move files between directories, remove both files and directories, and use wildcards and recursion. Special characters for file globbing like asterisks and question marks are also described.
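A minimal sketch of the commands this summary names (cp, mv, rm, mkdir, touch) plus the two glob characters; every file and directory name below is a throwaway example created in a temporary directory, not taken from the summarized document.

```shell
# Throwaway demonstration of cp, mv, rm, mkdir, touch and globbing.
workdir=$(mktemp -d)
cd "$workdir"

touch notes.txt report.txt draft.md   # touch creates empty files (or updates timestamps)
mkdir backup                          # create a directory
cp notes.txt backup/                  # copy a file into a directory
cp -r backup backup-copy              # -r copies a directory recursively
mv draft.md draft-final.md            # mv renames (moves) a file
ls *.txt                              # '*' matches any run of characters
ls report.tx?                         # '?' matches exactly one character
rm report.txt                         # remove a file
rm -r backup-copy                     # remove a directory and everything in it
```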
This document provides an overview of basic file management tasks in Linux, including copying, moving, removing files and directories. It discusses commands like cp, mv, rm, mkdir, touch and their usage for tasks such as copying files and directories recursively, removing empty or non-empty directories, updating file timestamps, and using wildcards. The document also covers setting and understanding file permissions and using tools like ls for listing files.
This document provides an overview of basic file management tasks in Linux, including copying, moving, removing files and directories. It discusses commands like cp, mv, rm, mkdir, touch and their usage for tasks such as copying files and directories recursively, removing files and directories recursively, creating and removing directories. It also covers concepts like file permissions, wildcards and understanding the output of the ls command.
Open Source Backup Conference 2014: Workshop Bareos introduction, by Philipp ..., NETWAYS
It gives an introduction to the architecture of Bareos and how the components of Bareos interact. The configuration of Bareos will be discussed and the main Bareos features will be shown. As a practical part of the workshop, the adaptation of the preconfigured standard backup scheme to the attendees’ wishes will be developed.
Attendees are kindly asked to contribute configuration tasks that they want to have solved.
This document provides an overview of basic Linux file management commands like cp, mv, rm, mkdir and touch. It discusses using cp to copy files and directories, mv to move and rename files, rm to remove files and directories, mkdir to create directories and touch to update file timestamps. It also covers using find to search for files based on criteria like name, size, permissions and timestamps.
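The find criteria listed above (name, size, permissions, timestamps) can be sketched as follows; the directory and file names are throwaway examples, not taken from the summarized document.

```shell
# Throwaway demonstration of find with the criteria named above.
workdir=$(mktemp -d)
mkdir -p "$workdir/logs"
touch "$workdir/logs/app.log" "$workdir/logs/app.conf"
chmod 644 "$workdir/logs/app.log"      # set modes explicitly so the examples are deterministic
chmod 600 "$workdir/logs/app.conf"

find "$workdir" -name '*.log'          # match by name pattern
find "$workdir" -type f -size -1M      # regular files smaller than 1 MiB
find "$workdir" -type f -perm 600      # exact permission bits
find "$workdir" -type f -mtime -1      # modified within the last 24 hours
```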
This document discusses Linux file systems and creating partitions and filesystems in Linux. It covers the following key points:
1. Linux supports various filesystems like ext2, ext3, xfs, and ReiserFS that can be created using mkfs. Swap spaces are created with mkswap.
2. Partitions and filesystems can be created using tools like fdisk, cfdisk, and gpart. Filesystem types include ext3, xfs, FAT, etc.
3. The Filesystem Hierarchy Standard defines the directory structure and recommended layout of files on Linux systems with directories like /bin, /etc, /home, /usr, /var, etc.
This document provides an overview of Debian package management tools and utilities. It discusses using dpkg to install, remove, get information on, and manage packages. It also covers using dselect which provides a character-based graphical interface to manage packages from various sources like CDROM, NFS, hard disk, FTP etc. Key tools covered include dpkg, dselect, apt-get, and utilities like dpkg-reconfigure, apt-cache.
The document provides information about designing hard disk layouts in Linux systems. It discusses partitioning schemes and the use of extended partitions to allow for more than 4 primary partitions. It also covers creating filesystems and swap spaces on partitions using tools like mkfs, mkswap, and mke2fs. Mount points are explained as directories where partitions can be mounted to make their contents accessible in the file system hierarchy.
A server is a machine configured to accept requests from clients and respond accordingly. Linux is commonly used for servers, with distributions like Ubuntu, Red Hat, and Debian. Key principles of Linux include treating everything as a file, storing configuration data in text files, and using pipes to connect programs. Common server files include /etc/group, /etc/passwd, and /etc/shadow, which contain user and group information.
This document provides information about scheduling jobs in UNIX/Linux systems. It discusses using the cron daemon to schedule jobs to run periodically based on time and date settings. It also covers using the at command to schedule single jobs to run once at a specific time. The crontab file format and common cron directories are described. It outlines how to list, delete, and manage scheduled jobs, and how user access to job scheduling is configured through cron access control files.
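The crontab file format mentioned above can be sketched as a single entry; the field order is minute, hour, day-of-month, month, day-of-week, then the command. The backup script path is a hypothetical example, and nothing here installs a real crontab.

```shell
# One crontab-style line: minute hour day-of-month month day-of-week command.
entry='30 2 * * 1 /usr/local/bin/backup.sh'   # 02:30 every Monday (hypothetical script path)

# Split the fields without glob expansion by using read:
read -r minute hour dom month dow cmdline <<EOF
$entry
EOF
echo "runs at $hour:$minute on day-of-week $dow"   # → runs at 2:30 on day-of-week 1
```

For a one-off job the at command takes a time instead of a recurrence, e.g. `echo /usr/local/bin/backup.sh | at 02:30` (again a hypothetical script path).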
This document discusses Linux file systems and partitioning. It covers commands used to create partitions like fdisk and mkfs, as well as filesystem types like ext3. It also discusses creating and managing swap spaces. The key points are that Linux uses mkfs to format partitions, fdisk to create partitions, and mkswap to initialize swap spaces which are then activated with swapon.
Linux is an open-source operating system that can run on various hardware. The document discusses various Linux commands and concepts related to directories, files, permissions, users, groups, text editors like vi and vim, process management, disk partitioning and more. It also covers Linux installation, package management, shell scripting and configuring network and services like SSH, web servers and more. Exercises are included to help understand concepts like mount points, journaling and file attributes.
This document provides information about creating partitions and filesystems in Linux. It discusses various Linux filesystem types like ext2, ext3, xfs, reiserfs v3, and vfat. It covers the commands and tools used to create partitions (fdisk, mkfs), filesystems (mkfs), and swap spaces (mkswap, swapon). It also discusses viewing filesystem information, mounting filesystems, and the Filesystem Hierarchy Standard for directory structure in Linux.
This document discusses Linux file permissions and how they work. It explains the components of a Linux file permission listing using the ls -l command as an example. It then covers the meaning of the different parts of the file permission listing like file type, permission modes for owner/group/other, link count, ownership, size, time stamp, and file name. The document also discusses how to determine permissions, change permission modes using chmod, change ownership with chown/chgrp, special permissions like SUID/SGID, and default file permissions set with umask.
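A minimal sketch of the chmod and umask behavior described above. The file names are throwaway examples; chown/chgrp normally require root, and the user and group names in their comments (alice, staff) are hypothetical.

```shell
# Throwaway demonstration of permission modes, chmod, and umask.
workdir=$(mktemp -d)
cd "$workdir"

touch script.sh
chmod 754 script.sh          # octal: rwx r-x r-- for owner/group/other
chmod u-x script.sh          # symbolic: remove execute from the owner
chmod u+x,go-rx script.sh    # restore owner execute, strip group/other read+execute
ls -l script.sh              # mode column now reads -rwx------

umask 077                    # new files get no group/other permissions at all
touch private.txt            # 666 & ~077 = 600, i.e. rw-------
ls -l private.txt

# chown alice script.sh      # change the owner (root only; names hypothetical)
# chgrp staff script.sh      # change the group
```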
If you are new to Linux in 2020, go for the latest Mint or Fedora. If you only want to practice the Linux command line, then install one Debian server and/or one CentOS server (without a graphical interface).
Due to limitations in the supported zip file format, four files had to be renamed from their original names to new names for zipping. The original and new names of each file are listed.
This document discusses Linux file management and file system hierarchy. It covers topics such as directory trees with the root directory at the top; absolute and relative file paths; commands for creating, deleting, copying, moving and linking files; and commands for finding files and determining file types and command locations.
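The path and linking concepts listed above can be sketched briefly; all names below are throwaway examples created in a temporary directory.

```shell
# Throwaway demonstration of paths, links, and command location.
workdir=$(mktemp -d)
cd "$workdir"

mkdir -p docs/archive
echo 'v1' > docs/plan.txt

ln docs/plan.txt plan-hard.txt     # hard link: a second name for the same inode
ln -s docs/plan.txt plan-soft.txt  # symbolic link: a pointer to a path
cat plan-soft.txt                  # reading the symlink follows it to the target

pwd                                # absolute path: starts at the root directory /
cd docs/archive                    # relative path: resolved from the current directory
cd ../..                           # '..' steps back up the tree
type ls                            # where a command name resolves from
```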
This document provides information about core dumps in Linux systems. It discusses how to enable and disable core dumps, customize core dump settings like file name patterns and memory segments dumped. It also provides examples of generating, analyzing and validating core dumps.
The document discusses various Linux commands for shells, files, processes, networking and more. It provides descriptions and examples of commands like bash, ls, cat, grep, cut and vi for navigating directories, viewing files, searching contents and extracting fields. It also covers concepts like pipes, redirection, wildcards and regular expressions.
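A minimal sketch of the pipe, redirection, grep, and cut usage mentioned above, on a throwaway colon-separated file in the style of /etc/passwd (the user names are made up).

```shell
# Throwaway demonstration of redirection, pipes, grep, and cut.
workdir=$(mktemp -d)

# '>' redirects output into a file; here a small passwd-like colon format:
printf 'alice:x:1001\nbob:x:1002\ncarol:x:1003\n' > "$workdir/users.txt"

grep 'bob' "$workdir/users.txt"       # search file contents for a pattern
cut -d: -f1 "$workdir/users.txt"      # extract the first colon-separated field
grep -c ':' "$workdir/users.txt"      # -c counts matching lines

# A pipe feeds one command's output into the next:
first=$(cut -d: -f1 "$workdir/users.txt" | sort | head -n 1)
echo "$first"                         # → alice
```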
This document provides an overview of basic Unix commands including ls, cd, pwd, mkdir, rm, rmdir, cp, find, touch, echo, cat, who, and du. It explains what each command is used for and provides examples of common usages. The document serves as a beginner's guide to learning Unix commands.
This document discusses the Red Hat Package Manager (RPM), including what it can do, who uses it, terminology, database location, common operations like install, uninstall, query, and upgrade, and various options for those operations. RPM is used to install, manage, and uninstall software packages on Red Hat, Fedora, CentOS and other Linux distributions. It allows adding, removing, upgrading and verifying packages and their dependencies.
The document discusses designing hard disk layouts in Linux systems. It describes partitioning schemes, including extended and logical partitions. It explains how to create filesystems and swap spaces using tools like fdisk, mkfs, mkswap. It also covers formatting disks or partitions, and the various Linux filesystem types and standards like FHS.
P2Cinfotech is one of the leading online IT training facilities and job consultants, spread all over the world. We have successfully conducted online classes on various software technologies that are currently in demand. To name a few, we provide quality online training for QA, QTP, Manual Testing, HP LoadRunner, BA, and Java technologies.
Unique features of P2Cinfotech:
1. All online software training batches are handled by real-time working professionals only.
2. Live online training with real-time, face-to-face, instructor-to-student interaction.
3. Good online training virtual classroom environment.
4. Special exercises and assignments to make you self-confident in your course subject.
5. Interactive sessions to update students with the latest developments on the particular course.
6. Flexible batch timings and a proper timetable.
7. Affordable, decent and flexible fee structure.
8. Extended technical assistance even after completion of the course.
9. 100% job assistance and guidance.
Courses we cover:
Quality Assurance
Business Analysis
QTP
JAVA
Apps Development Training
Register for a free DEMO:
www.p2cinfotech.com p2cinfotech@gmail.com +1-732-546-3607 (USA)
Ben-Gurion University of the Negev was established in 1969 with the aim of developing the Negev desert region, which comprises over 60% of Israel's land area. It was inspired by Israel's first Prime Minister David Ben-Gurion's vision that the country's future lay in this region. Today, Ben-Gurion University has campuses in Beer-Sheva as well as in Eilat and Sede Boqer, where Ben-Gurion lived in his final years and is buried.
This document discusses backups in Linux systems. It covers various tools for backing up files and directories like tar, rsync, burning software and tape drives. It explains how to create, view, extract tar archives and compress them using gzip or bzip2 to reduce file sizes. Key areas covered include what to backup, local and remote backup locations, and how backups are different for small and large systems. The document also provides a brief overview of zip files and their role in backups.
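The tar workflow described above (create, view, extract, compress with gzip) can be sketched as follows; the archive and directory names are throwaway examples.

```shell
# Throwaway demonstration of creating, listing, and extracting a gzip tarball.
workdir=$(mktemp -d)
cd "$workdir"

mkdir project
echo 'hello' > project/readme.txt

tar -czf project.tar.gz project       # c=create, z=gzip-compress, f=archive file name
tar -tzf project.tar.gz               # t=list (view) the archive contents
mkdir restore
tar -xzf project.tar.gz -C restore    # x=extract, -C chooses the destination directory
```

Swapping -z for -j selects bzip2 compression instead of gzip.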
The document discusses designing hard disk layouts in Linux systems. It covers key areas like allocating filesystems and swap space to separate partitions, tailoring the design to the intended system use, and ensuring boot partition requirements are met. It provides details on partitioning schemes, creating and formatting partitions and filesystems, swap space creation, and the Linux disk naming convention. The goal is to help administrators properly layout disks and partitions for Linux installation and package management.
This document provides an overview of Hadoop HDFS/MapReduce architecture, hardware requirements, installation and configuration process, monitoring options, and key components like the Namenode. It discusses configuring the Namenode, JobTracker, DataNodes, and TaskTrackers. Hardware requirements for the NameNode/JobTracker and DataNode/TaskTracker are specified. Installation can be done via tar file download or using a prebuilt rpm. Configuration involves editing XML configuration files and starting required services. Monitoring can be done via web scraping, Ganglia, or Cacti. The Namenode writes edits to RAM and write-ahead log, with checkpoints to the filesystem. High availability is experimental in YARN and Hadoop
This document discusses Linux backup strategies and tools. It covers topics like using tar to backup directories and flatten them into single files, compressing backups with tools like gzip and bzip2, and combining compression with tar to create compressed tarballs. Specific utilities are explained, like tar for archiving, rsync for synchronization, and various compression algorithms like bzip2, gzip, and zip. Factors that influence backup strategies like budget, time, and needs are also mentioned.
The document discusses loop control structures in C++. It explains the for, while, and do-while loops and provides examples. It also covers break, continue, return, and goto statements used to control program flow in loops.
1) Virtual memory simplifies address translation by creating a virtual memory space that contains the operating system and application programs, even though it does not physically exist.
2) Virtual memory is divided into two components - the first occupies real memory and contains the operating system and page pool, while the second occupies external storage and holds application programs, with pages swapped between real memory and external storage.
3) Address translation under virtual memory uses segment tables and page tables to map virtual addresses to physical addresses.
This document contains information about various components and responsibilities of an operating system. It discusses user interfaces like graphical user interfaces (GUIs) and command line interfaces (CLIs). It also describes basic management functions performed by the operating system, including file management, task management, and memory management. File management involves creating, defining, saving, and deleting files. Task management includes multi-tasking, multi-programming, and multi-processing capabilities. Memory management refers to how the operating system allocates and manages RAM and uses virtual memory.
This document discusses techniques for file management including free space management using free lists and bitmaps, file access control using access control matrices and user classes, and data loss prevention using physical and logical backups. Free space management aims to reuse deleted file space. Access control restricts user access to sensitive files. Backups store redundant data copies to ensure continued data availability.
This document summarizes file management techniques including file organization, structure, and allocation methods. It discusses sequential, direct, and indexed sequential file organization and their characteristics. It also describes three methods of file allocation: contiguous, linked list using blocks, and linked list using an index. The linked list methods allow non-contiguous allocation by linking file extents through pointers.
The document discusses various aspects of file systems and storage management in operating systems. It covers topics like file attributes, operations, structures, access methods, directory structures, file sharing, consistency semantics, and protection. File attributes include the file name, size, type and protection attributes. Common file operations are creating, reading, writing and deleting files. Files can have sequential, direct or indexed access methods. Directory structures can be single-level, two-level, tree-structured or graph-based. File sharing requires consistency models like Unix, session or immutable semantics. Protection controls access via access matrices, access control lists or capability lists.
The document discusses different allocation methods for managing disk space for files in an operating system. It describes contiguous, linked, and indexed allocation. Contiguous allocation allocates files in contiguous blocks, simplifying access but wasting space. Linked allocation uses pointers to non-contiguous blocks, avoiding fragmentation. Indexed allocation uses an index block to store block addresses, requiring multiple accesses to retrieve a block.
The document discusses file management and directories. It describes block management strategies like contiguous allocation and linked lists. It discusses reading and writing byte streams which involves packing and unpacking blocks of data. It also covers supporting high-level file abstractions like structured sequential files and indexed sequential files. Finally, it discusses directories and their structures like hierarchical and graph-based organizations.
Unix is an operating system that provides an interface between users and hardware. It manages storage, I/O devices, and allows for multi-tasking and multi-user access. The Unix operating system originated in 1969 and has evolved over time with contributions from various universities and companies. It uses a kernel to manage interactions between users, applications, and hardware. The shell is the interface that users interact with to run commands and programs on the system.
101 4.5 manage file permissions and ownership, by Acácio Oliveira
This document discusses managing file permissions and ownership in Linux systems. It covers commands such as chmod, umask, chown, and chgrp which can change access permissions and ownership of files and directories. The document also discusses concepts like users, groups, and security levels and how they relate to file permissions and access.
This document provides an overview of basic shell commands in Linux/Unix systems. It lists 42 common commands like cat, cd, cp, grep, gzip, kill, ls, man, mv, ps, pwd and tar along with brief descriptions of their usage. The commands allow users to navigate directories, view, edit and manage files, compress and archive files, view system processes and documentation, and more.
Solaris is a version of Unix developed by Sun Microsystems based on System V Unix. It has been widely used in enterprise environments. Learning Solaris involves understanding its design philosophy and basic commands before moving to more advanced topics. Some key commands include ps, df, and uname to check processes, disk space, and the OS version. Directories, files, and permissions can be managed using commands like mkdir, chown, chmod while groups and users are configured with useradd, groupadd, and usermod. Services are managed through SMF tools like svcadm rather than init scripts.
This document provides an introduction to the bash shell and basic Linux commands. It discusses what a shell is, differences between common shells like bash, Bourne, and C shells. It also covers basic UNIX commands like ls, touch, mkdir, rm, and their arguments and usage. The document teaches how to manipulate files and directories, check command return codes, and get command help documentation.
101 4.2 maintain the integrity of filesystemsAcácio Oliveira
The document discusses maintaining the integrity of Linux filesystems. It describes tools like fsck, e2fsck, and tune2fs for checking and repairing filesystems, as well as df, du, and debugfs for monitoring filesystem usage and exploring ext filesystem internals. The document provides examples of using these tools to check filesystems by label, UUID, or device node; view free space and inode information; and attempt undeletion of files.
101 4.2 maintain the integrity of filesystemsAcácio Oliveira
The document discusses maintaining the integrity of Linux filesystems. It describes tools like fsck, e2fsck, and tune2fs for checking filesystems, df and du for monitoring free space and disk usage. It provides examples of using these tools to check filesystems by label, UUID, or device name, and monitor mounted filesystem types and inodes. The document emphasizes that filesystems should be unmounted before running fsck to avoid potential damage.
Docker and friends at Linux Days 2014 in Praguetomasbart
Docker allows deploying applications easily across various environments by packaging them along with their dependencies into standardized units called containers. It provides isolation and security while allowing higher density and lower overhead than virtual machines. Core OS and Mesos both integrate with Docker to deploy containers on clusters of machines for scalability and high availability.
This document provides an introduction to basic Linux commands and file system concepts. It lists common commands like ls, pwd, cat, and their usage. It covers directory structure, permissions, and how to set user and group permissions. It also shows how to create symbolic links, delete and create files and directories. Finally, it demonstrates how to set the temporary and permanent hostname and create a basic client-server network with two machines.
This document discusses maintaining the integrity of Linux filesystems. It covers tools like fsck, e2fsck, tune2fs, and debugfs for checking and repairing filesystems. It also covers commands like df, du, and dumpe2fs for monitoring filesystem usage and properties. The document provides examples of using these tools to check filesystem integrity, find free space, view inode information, and explore filesystem internals.
This document provides an overview of a talk about Docker. It introduces Docker features like images, containers, and the workflow. It describes how Docker uses namespaces and control groups for isolation. It compares Docker to virtual machines and explains why Docker is popular. The document then demonstrates Docker through a tutorial of pulling an image, running a container, and viewing container logs. It also discusses the Dockerfile for automating builds.
This document provides an overview of basic Linux commands and concepts for beginners. It covers topics such as opening the terminal, changing directories, listing and manipulating files and folders, searching for files, managing processes, installing packages, setting environment variables, and compressing files. The document is intended to help new Linux users learn the basics of how Linux is organized and how to navigate and perform tasks on the command line interface.
This document provides 50 examples of common Linux/Unix commands along with brief explanations and usage examples for each command. Some of the commands highlighted include tar, grep, find, ssh, sed, awk, vim, diff, sort, and ls. The examples cover a wide range of tasks from compressing/extracting files to searching/editing text to managing processes and permissions.
This document provides 50 examples of common Linux/Unix commands along with brief explanations and usage examples for each command. Some of the commands highlighted include tar, grep, find, ssh, sed, awk, vim, diff, sort, export, xargs, ls, pwd, cd, gzip, bzip2, unzip, shutdown, ftp, crontab, service, ps, top, df, kill, rm, cp, mv, cat, mount, chmod, chown, passwd, mkdir, ifconfig, and uname. The document is intended to give readers a quick start on frequently used commands.
This document discusses Linux text stream filters and provides examples of common Unix commands used to process and modify text streams. These commands include cat, head, tail, cut, and split. Cat prints the contents of files, head prints the first few lines, tail prints the last few lines, cut extracts parts of each line, and split divides files into smaller parts. The document also covers input/output redirection and how it can be used with text stream filters.
This document discusses Linux text stream filters and provides examples of common Unix commands used to process and modify text streams. These commands include cat, head, tail, cut, and split. Cat prints the contents of files, head prints the first few lines, tail prints the last few lines, cut extracts parts of each line, and split divides files into smaller parts. The document also covers input/output redirection and how it can be used with filters to modify command output and send it to files.
This document provides an introduction to Linux command basics and top 50 commands. It discusses important commands like pwd, ls, cd, mkdir, rmdir, lsblk, mount, df, ps, kill, touch, cat, head, cp, mv, comm, ln and more. It also covers users and groups, file ownership and permissions. Lab exercises are included to practice using commands like mkdir, chmod, chown and displaying directory contents. Finally, it discusses useful filter commands like grep, uniq and sort as well as the text manipulation command awk.
This document provides 50 examples of common Linux/Unix commands organized by command name. It begins with tar, grep, find, ssh, sed, awk, vim, diff, sort, and export examples. The document is intended as a quick reference for users to learn practical uses of fundamental Linux commands.
How to make debian package from scratch (linux)Thierry Gayet
- The document discusses two methods for creating Debian packages: using dpkg-deb or dpkg-buildpackage.
- It provides step-by-step instructions for creating the package directory structure, metadata files, building and installing the package, and verifying installation.
- An alternative method using dh_make is also presented, which simplifies the process by automatically generating basic packaging files and directories.
This document discusses the differences between vulnerability scanning and penetration testing. Vulnerability scanning uses automated tools to identify vulnerabilities while penetration testing involves manual techniques to simulate real-world attacks. The document also notes that vulnerability scanning and penetration testing have different levels of testing and are covered in Topic 24 of the Security+ Study Guide for 10 minutes.
This document discusses application security controls and techniques. It covers secure coding concepts, other security controls and techniques, and provides a 10 minute overview of application security controls and techniques as outlined in the CompTIA Security+ Study Guide. The document was prepared by Acácio Oliveira in 2021.
This document discusses types of application attacks and is divided into two sections. The first section defines application attacks and covers common types. The second section explains the goals of application attacks and how they can divulge weaknesses in some applications. The entire document is part of a Security+ study guide and covers types of application attacks over a period of 20 minutes.
Security+ Lesson 01 Topic 19 - Summary of Social Engineering Attacks.pptxAcácio Oliveira
This document provides a summary of social engineering attacks. Social engineering attacks are effective because they target human nature. There are different types of social engineering attacks, such as pretexting where attackers pretend to be someone trustworthy to get victims to reveal sensitive information. The document is part of a study guide for the CompTIA Security+ certification and focuses on summarizing social engineering attacks.
This document discusses security assessment tools, including types of security assessments and specific assessment tools. It is part of a CompTIA Security+ study guide prepared in 2021, focusing on an overview of security assessment tools in about 10 minutes.
This document provides a 10 minute summary of wireless attacks. It discusses common types of wireless attacks and attacks on wireless encryption. The document is part of a study guide for the CompTIA Security+ certification and is prepared by Acácio Oliveira in 2021.
This document discusses security enhancement techniques for network security. It covers the difference between detection controls and prevention controls. The topic is part of the Security+ Study Guide Concepts prepared by Acácio Oliveira in 2021 and focuses on network security enhancement techniques for 10 minutes as part of the study guide's Topic 22 on security enhancement techniques.
This document discusses risk management best practices for the CompTIA Security+ exam, covering business continuity concepts, fault tolerance, and is part of a study guide prepared by Acácio Oliveira in 2021 with an estimated reading time of 10 minutes.
This document discusses physical security and environmental controls, which are important concepts for the CompTIA Security+ exam. It covers control types for physical security and the environment, such as restricting physical access and monitoring for issues like fires, floods or power outages. The document is from a study guide prepared in 2021 to help study for the Security+ exam.
This document discusses disaster recovery concepts for the CompTIA Security+ exam, including maintaining disaster recovery sites, performing regular data backups, and ensuring the ability to recover data and systems in the event of a disaster within 10 minutes of scheduled content.
This document discusses wireless security considerations for the CompTIA Security+ exam. It covers the unique security challenges of wireless networks compared to wired networks. Specific topics include security for wireless networks and an index of study guide topics related to wireless security considerations with an allocated time of 10 minutes.
Security+ Lesson 01 Topic 04 - Secure Network Design Elements and Components....Acácio Oliveira
This document discusses secure network design elements and components. It covers defense in depth as a strategy as well as elements and components of network design such as firewalls, intrusion detection systems, and virtual private networks. The document is from Acácio Oliveira's 2021 CompTIA Security+ Study Guide and focuses on secure network design for 10 minutes.
This document provides a 3-topic outline for a CompTIA Security+ study guide prepared by Acácio Oliveira in 2021. Topic 1 discusses secure network administration concepts like spam filters and network devices. Topic 2 is titled "Secure Network Administration Concepts" and is allocated 10 minutes of study time. The document also includes a topics index for the study guide.
This document provides an overview of the concepts to be covered in Lesson 01 on introduction to network devices from the CompTIA Security+ Study Guide. The lesson will be divided into three parts covering OSI models, basic network devices and layer security concepts in part 1; network devices in part 2; and spam filters and additional network devices in part 3, for a total time of 30 minutes.
Security+ Lesson 01 Topic 08 - Integrating Data and Systems with Third Partie...Acácio Oliveira
This document discusses integrating data and systems with third parties. It examines evaluating risks with integration and considerations for integration. The document is part of a Security+ study guide prepared by Acácio Oliveira in 2021 on the topic of integrating data and systems with third parties.
Security+ Lesson 01 Topic 07 - Risk Related Concepts.pptxAcácio Oliveira
This document outlines concepts related to risk for the CompTIA Security+ certification. It covers three topics on risk-related concepts over 30 minutes: policies for controlling and reducing risk, qualitative versus quantitative risk assessments, and treating risks such as those involving cloud computing and virtualization. The document was prepared by Acácio Oliveira for Security+ study.
Security+ Lesson 01 Topic 05 - Common Network Protocols.pptxAcácio Oliveira
This document provides an overview of the topics to be covered in a 30-minute Common Network Protocols lesson for CompTIA Security+ certification preparation. The lesson will be divided into three parts covering IPv4 and IPv6, network storage protocols, the difference between ports and protocols, common protocols, and end-to-end security.
This document provides a 10 minute overview of incident response concepts for Security+ certification preparation. It discusses first responder responsibilities during security incidents and outlines incident response procedures and concepts covered in the Security+ study guide. The document is titled "Incident Response Concepts" and was prepared by Acácio Oliveira in 2021 as part of a Security+ study guide.
Security+ Lesson 01 Topic 12 - Security Related Awareness and Training.pptxAcácio Oliveira
This document discusses security related awareness and training. It covers the security policy, security awareness, and is part of a study guide for the CompTIA Security+ exam. The topic is allocated 10 minutes as part of the study guide index.
Security+ Lesson 01 Topic 17 - Types of Malware.pptxAcácio Oliveira
Malware is defined and common types are discussed including viruses, worms, Trojans, ransomware, and spyware. The document provides an overview of different types of malware for 10 minutes as part of a CompTIA Security+ study guide on the topic of types of malware.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Core Linux for RedHat and Fedora learning under GNU Free Documentation License - Copyleft (c) Acácio Oliveira 2012
Everyone is permitted to copy and distribute verbatim copies of this license document, changing is allowed
basic file management
ls - lists the contents of directories
Files, directories and ls
In the output of ls -l, the columns from left to right are:
•file type and permissions (d – directory, l – symbolic link, r – read, w – write, x – executable or searchable)
•number of links to the file
•user name
•group name
•size (in bytes)
•date
•time
•file name
After the file name, ls may print additional info: "/" – it is a directory, "*" – it is executable, "-> somewhere" – it is a symbolic link.
Ex:
bar@foo:~> cd /etc/ppp/
bar@foo:/etc/ppp> ls -la
total 68
drwxr-x--- 5 root dialout 4096 2003-05-12 12:36 ./
drwxr-xr-x 53 root root 8192 2003-06-04 10:51 ../
lrwxrwxrwx 1 root root 5 2003-03-03 11:09 ip-down -> ipup*
drwxr-xr-x 2 root root 4096 2002-09-10 19:40 ip-down.d/
-rwxr-xr-x 1 root root 9038 2002-09-10 19:40 ip-up*
drwxr-xr-x 2 root root 4096 2002-09-10 19:40 ip-up.d/
-rw-r--r-- 1 root root 8121 2002-09-09 22:32 options
-rw------- 1 root root 1219 2002-09-09 22:32 pap-secrets
#type+perm link user group size date time name
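The trailing "/", "*" and "->" markers can be reproduced in a scratch directory. A minimal sketch, assuming GNU coreutils and a POSIX shell; all paths and file names here are throwaway examples:

```shell
# Show ls -F classification suffixes in a throwaway directory
dir=$(mktemp -d)
mkdir "$dir/sub"              # listed as "sub/"
touch "$dir/script.sh"
chmod +x "$dir/script.sh"     # listed as "script.sh*"
ln -s script.sh "$dir/link"   # "link@" with -F, "link -> script.sh" with -lF
ls -F "$dir"
rm -rf "$dir"
```

In the /etc/ppp listing above, the prompt's ls is evidently aliased to include -F, which is why the suffixes appear without being asked for explicitly.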
File globbing uses special characters and strings that can take the place of arbitrary characters, or of specific characters.
File globbing (wildcards)
Ex:
foo:/etc $ ls -l host*
-rw-r--r-- 1 root root 17 Jul 23 2000 host.conf
-rw-rw-r-- 2 root root 291 Mar 31 15:03 hosts
-rw-r--r-- 1 root root 201 Feb 13 21:05 hosts.allow
-rw-rw-r-- 1 root root 196 Dec 9 15:06 hosts.bak
-rw-r--r-- 1 root root 357 Jan 23 19:39 hosts.deny
foo:/etc $ ls [A-Z]* -lad
drwxr-xr-x 3 root root 4096 Oct 14 11:07 CORBA
-rw-r--r-- 1 root root 2434 Sep 2 2002 DIR_COLORS
-rw-r--r-- 1 root root 2434 Sep 2 2002 DIR_COLORS.xterm
-rw-r--r-- 1 root root 92336 Jun 23 2002 Muttrc
drwxr-xr-x 18 root root 4096 Feb 24 19:54 X11
The shell expands the following special characters when they appear unquoted on the command line. Filename expansion is not performed if the special characters are quoted.
~ /home/user or /root (the user's home directory) echo ~/.bashrc
* Any sequence of characters (or no characters) echo /etc/*ost*
? Any one character echo /etc/hos?
[abc] Any of a, b, or c echo /etc/host[s.]
{abc,def,ghi} Any of abc, def or ghi echo /etc/hosts.{allow,deny}
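The patterns in the table can be compared side by side in a scratch directory. A minimal sketch assuming bash; the file names are throwaway examples mirroring the /etc/hosts* listing above:

```shell
# Compare glob expansion with and without quoting
dir=$(mktemp -d); cd "$dir"
touch hosts hosts.allow hosts.deny

echo host*                # shell expands: hosts hosts.allow hosts.deny
echo host?                # ? matches exactly one character: hosts
echo hosts.[ad]*          # [ad] matches a single a or d: hosts.allow hosts.deny
echo 'host*'              # quoted, so no expansion: prints host* literally
echo hosts.{allow,deny}   # in bash, braces expand even with no matching files

cd /; rm -rf "$dir"
```

Note that brace expansion is a bash feature and happens before (and independently of) filename matching, which is why it works even when the named files do not exist.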
mkdir – make directory
Directories and files
[jack@foo tmp]$ mkdir dir1
[jack@foo tmp]$ mkdir dir2 dir3
[jack@foo tmp]$ ls -la
total 20
drwxrwxr-x 5 jack jack 4096 Apr 8 10:00 .
drwxr-xr-x 12 jack users 4096 Apr 8 10:00 ..
drwxrwxr-x 2 jack jack 4096 Apr 8 10:00 dir1
drwxrwxr-x 2 jack jack 4096 Apr 8 10:00 dir2
drwxrwxr-x 2 jack jack 4096 Apr 8 10:00 dir3
[jack@foo tmp]$ mkdir parent/sub/something
mkdir: cannot create directory `parent/sub/something': No such file
or directory
[jack@foo tmp]$ mkdir -p parent/sub/something
[jack@foo tmp]$ find
.
./dir1
./dir2
./dir3
./parent
./parent/sub
./parent/sub/something
Ex:
To make a directory, use mkdir. By default, the parent directory must exist, but with the -p switch mkdir will
also create any required parent directories.
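As a sketch, -p also combines well with the shell's brace expansion to lay out a small tree in one command (the directory names here are invented):

```shell
# -p creates any missing parents and stays silent if a directory already exists
mkdir -p /tmp/mkdir-demo/project/{src,docs,tests/unit}
find /tmp/mkdir-demo -type d
```

Running the same mkdir -p line again is harmless, which makes it convenient in scripts.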
touch – update a file's time stamp
[jack@foo tmp]$ mkdir dir4
[jack@foo tmp]$ cd dir4
[jack@foo dir4]$ ls -l
total 0
[jack@foo dir4]$ touch newfile
[jack@foo dir4]$ ls -l
total 0
-rw-rw-r-- 1 jack jack 0 Apr 8 10:28 newfile
Ex:
touch was originally written to update the time stamp of files. However, due to a programming error, touch would create an empty file if it did not
exist. By the time this error was discovered, people were using touch to create empty files, and this is now an accepted use of touch.
[jack@foo dir4]$ touch newfile
[jack@foo dir4]$ ls -l
total 0
-rw-rw-r-- 1 jack jack 0 Apr 8 10:29 newfile
The second time we run touch, it updates the file's timestamp (note the time changed from 10:28 to 10:29).
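touch can also set an explicit timestamp with the -t switch ([[CC]YY]MMDDhhmm). A small sketch, using an invented file name:

```shell
cd /tmp
touch stampfile                    # first run creates the empty file
touch -t 202001011200 stampfile    # backdate it to 2020-01-01 12:00
ls -l stampfile                    # the date column now shows Jan 2020
touch stampfile                    # plain touch resets the timestamp to "now"
```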
rm – remove files
[hack@foo dir4]$ cd ..
[hack@foo tmp]$ ls dir4/
file-one-a file-one-b file-two-a file-two-b newfile
[hack@foo tmp]$ rm dir4/file*a
[hack@foo tmp]$ ls dir4/
file-one-b file-two-b newfile
Ex:
[hack@foo tmp]$ rm -ri dir4
rm: descend into directory `dir4'? y
rm: remove regular empty file `dir4/newfile'? y
rm: remove regular empty file `dir4/file-one-b'? y
rm: remove regular empty file `dir4/file-two-b'? y
rm: remove directory `dir4'? y
rm -r can be used to remove directories and the files contained in them.
rm -i specifies interactive behaviour (confirmation before each removal).
Ex:
rm -rf is used to remove directories and their files without prompting
[hack@foo tmp]$ rm -rf dir4
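A sketch of the related failure modes (directory names invented): plain rm refuses directories, and rmdir, a separate command, only removes empty ones:

```shell
mkdir -p /tmp/rm-demo/sub
touch /tmp/rm-demo/sub/afile

rm /tmp/rm-demo 2>/dev/null        || echo "plain rm refuses directories"
rmdir /tmp/rm-demo/sub 2>/dev/null || echo "rmdir needs an empty directory"
rm -rf /tmp/rm-demo                # -r descends, -f suppresses prompts and errors
ls -d /tmp/rm-demo 2>/dev/null     || echo "gone"
```

Because -f also silences "no such file" errors, rm -rf is best double-checked before pressing Enter.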
cp – copy files
Copying and moving
[jack@foo jack]$ ls
[jack@foo jack]$ cp /etc/passwd .
[jack@foo jack]$ ls
passwd
Ex: copying a file (/etc/passwd) to a directory (.)
cp -v is verbose about the progress of the copying process.
[jack@foo jack]$ mkdir copybin
[jack@foo jack]$ cp /bin/* ./copybin
[jack@foo jack]$ cp -v /bin/* ./copybin
`/bin/arch' -> `./copybin/arch'
`/bin/ash' -> `./copybin/ash'
`/bin/ash.static' -> `./copybin/ash.static'
`/bin/aumix-minimal' -> `./copybin/aumix-minimal'
`/bin/awk' -> `./copybin/awk'
Ex:
cp has 2 modes of usage:
1. Copy one or more files to a directory – cp file1 file2 file3 /home/user/directory/
2. Copy a file to a new file name – cp file1 file13
If more than 2 parameters are given to cp, the last parameter is always the destination directory.
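Both modes, plus the -r switch for copying whole directory trees, can be sketched as follows (the paths are invented):

```shell
mkdir -p /tmp/cp-demo/src
echo data > /tmp/cp-demo/src/file1

cp /tmp/cp-demo/src/file1 /tmp/cp-demo/src/file2   # mode 2: copy to a new name
cp -r /tmp/cp-demo/src /tmp/cp-demo/backup         # -r copies the directory tree
ls /tmp/cp-demo/backup
```

Without -r, cp refuses to copy a directory; with -r it recreates the whole tree under the destination.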
mv – move or rename files
[jack@foo jack]$ mkdir fred
[jack@foo jack]$ mv -v fred joe
`fred' -> `joe'
[jack@foo jack]$ cp -v /etc/passwd joe/
`/etc/passwd' -> `joe/passwd'
[jack@foo jack]$ mv -v joe/passwd .
`joe/passwd' -> `./passwd'
Ex: use -v so that mv is verbose.
mv is the command used for renaming files
[jack@foo jack]$ mv -v passwd ichangedthenamehaha
`passwd' -> `ichangedthenamehaha'
[jack@foo jack]$ cp /etc/services .
Ex:
mv (like cp) has 2 modes of operation:
1. Move one or more files to a directory – mv file1 file2 file3 /home/user/directory/
2. Move a file to a new file name (rename) – mv file1 file13
If more than 2 parameters are given to mv, the last parameter is always the destination directory.
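Both modes can be sketched together (the file names here are invented):

```shell
mkdir -p /tmp/mv-demo/dest && cd /tmp/mv-demo
touch a b c

mv a b c dest/            # mode 1: last parameter is the destination directory
mv dest/a dest/renamed    # mode 2: two parameters, second is not a directory
ls dest
```

Unlike cp, mv leaves nothing behind at the source: the original names disappear once the move completes.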
find – find files
find
[jack@tonto jack]$ find /usr/share/doc/unzip-5.50/ /etc/skel/
/usr/share/doc/unzip-5.50/
/usr/share/doc/unzip-5.50/BUGS
/usr/share/doc/unzip-5.50/INSTALL
/usr/share/doc/unzip-5.50/LICENSE
/usr/share/doc/unzip-5.50/README
/etc/skel/
/etc/skel/.kde
/etc/skel/.kde/Autostart
/etc/skel/.kde/Autostart/Autorun.desktop
/etc/skel/.kde/Autostart/.directory
/etc/skel/.bash_logout
/etc/skel/.bash_profile
.. // ..
Ex:
Conditions to find files:
• have names that contain certain text or match a given pattern
• are links to specific files
• were last used during a given period of time
• are within a given range of size
• are of a specified type (normal file, directory, symbolic link, etc.)
• are owned by a certain user or group or have specified permissions.
The simplest way of using find is to specify a list of directories for it to search. By default, find prints the name of each file it finds.
find is usually used to search based on particular criteria:
Test     Meaning                                           Example
-name    find based on the file's name                     find /bin /usr/bin -name 'ch*'
-iname   case-insensitive file name                        find /etc -iname '*pass*'
-ctime   find based on the time the file's status was      find /home -ctime +365
-mtime   changed, its contents were modified, or it        find /var/log -mtime -1
-atime   was last accessed (measured in days)              find /tmp -atime +30
-user    find based on the user or group owning            find /tmp -user `whoami`
-group   the file, or on the numeric uid/gid               find /tmp -uid `id -u`
-uid
-gid
-perm    find based on the file permissions                find /usr/bin -perm -4001
         (mode: exactly, -mode: all bits, +mode: any bit)  find /etc -type f -perm +111
-size    find files of a particular size                   find / -size +5000k
-type    find files, directories, pipes, sockets, links    find /dev /var -type p
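Several tests can be combined on one command line (an implicit AND), and -exec runs a command on every match. A sketch with invented file names:

```shell
mkdir -p /tmp/find-demo && cd /tmp/find-demo
touch notes.txt NOTES.TXT script.sh
chmod u+x script.sh

find . -iname '*.txt'                  # case-insensitive: matches both .txt files
find . -type f -perm -100              # regular files with the owner-execute bit set
find . -name '*.sh' -exec ls -l {} \;  # run ls -l on each match
```

In -exec, the {} placeholder is replaced by each found file name and the escaped \; terminates the command.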