SLES file server

The idmapd daemon is only required if Kerberos authentication is used, or if clients cannot work with numeric user names. Linux clients can work with numeric user names since Linux kernel 2.6.39. The idmapd daemon does the name-to-ID mapping for NFSv4 requests to the server and replies to the client. If required, idmapd needs to run on the NFSv4 server. Name-to-ID mapping on the client is done by nfsidmap, provided by the package nfs-client. Make sure that there is a uniform way in which user names and IDs (UIDs) are assigned to users across machines that might be sharing file systems using NFS.

If you are not sure, leave the domain as localdomain in both the server and the client configuration files. A sample configuration file looks like the following:
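This is a minimal sketch of what /etc/idmapd.conf might contain; the values shown (verbosity, pipefs directory, nobody user and group) are common defaults and may differ on your system:

    [General]
    Verbosity = 0
    Pipefs-Directory = /var/lib/nfs/rpc_pipefs
    Domain = localdomain

    [Mapping]
    Nobody-User = nobody
    Nobody-Group = nobody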

To start the idmapd daemon, run systemctl start nfs-idmapd. For more information, see the man pages of idmapd and idmapd.conf.

You must have a working Kerberos server to use Kerberos authentication with NFS. YaST does not set up the server but only uses the provided functionality.

To use Kerberos authentication in addition to the YaST configuration, complete at least the following steps before running the NFS configuration:

Make sure that both the server and the client are in the same Kerberos domain. Start the gssd service on the client with systemctl start rpc-gssd. Start the svcgssd service on the server with systemctl start rpc-svcgssd.

Kerberos authentication also requires the idmapd daemon to run on the server. For more information about configuring kerberized NFS, refer to the links in the corresponding section.

To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default. Proceed as follows: enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.

The default domain is localdomain. The firewall status is displayed next to the check box. When you start the YaST configuration client at a later time, it also reads the existing configuration from this file.

On diskless systems, where the root partition is mounted over the network as an NFS share, you need to be careful when configuring the network device through which the NFS share is accessible.

When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition.

With NFS root, this order causes problems: the root partition cannot be cleanly unmounted, because the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in the respective section.

The nfs service takes care of starting the required services properly; thus, start it by entering systemctl start nfs as root.

Then remote file systems can be mounted in the file system like local partitions, using mount. To import user directories from an NFS server, for example, use a command like the one shown below. To define the number of TCP connections that the client makes to the NFS server, you can use the nconnect option of the mount command. You can specify any number between 1 and 16, where 1 is the default value if the mount option has not been specified.
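As a sketch, where the server name nfs.example.com, the exported paths, and the local mount points are placeholders:

    # import user directories from an NFS server
    mount nfs.example.com:/home /home

    # mount an export with four TCP connections to the server
    mount -t nfs -o nconnect=4 nfs.example.com:/export /mnt/export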

The nconnect setting is applied only during the first mount process to the particular NFS server. If the same client executes the mount command to the same NFS server again, all already established connections will be shared; no new connection will be established. To change the nconnect setting, you have to unmount all client connections to the particular NFS server. Then you can define a new value for the nconnect option.

Set a quota for one of the subvolumes that was listed in the previous step.
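A minimal sketch, assuming quotas still need to be enabled on the file system and that /var/cache is the subvolume to limit (both are examples):

    # enable quota support once per file system, if not done already
    btrfs quota enable /
    # limit the subvolume to 5 GiB
    btrfs qgroup limit 5G /var/cache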

The size can either be specified in bytes, kilobytes (K), megabytes (M), or gigabytes (G), for example 5G. To list the existing quotas, to nullify an existing quota by setting its size to none, or to disable quota support for a partition and all its subvolumes with btrfs quota disable, use commands like the ones sketched below. See man 8 btrfs-qgroup and man 8 btrfs-quota for more details.
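A sketch, again assuming / is the mounted file system and /var/cache the subvolume in question:

    # list existing quotas and their limits
    btrfs qgroup show -r /
    # remove the quota from a subvolume
    btrfs qgroup limit none /var/cache
    # disable quota support for the whole file system
    btrfs quota disable /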

Btrfs allows you to create snapshots to capture the state of the file system. Snapper, for example, uses this feature to create snapshots before and after system changes, allowing a rollback. Snapshots can also be used for incremental backups. A btrfs send operation calculates the difference between two read-only snapshots of the same subvolume and sends it to a file or to STDOUT.

A btrfs receive operation takes the result of the send command and applies it to a snapshot. A Btrfs file system is required on the source side (send) and on the target side (receive). First, create a read-only snapshot of the subvolume you want to back up on the source side. It will be used as the basis for the next incremental backup and should be kept as a reference.

Send the initial snapshot to the target side. When the initial setup has been finished, you can create incremental backups and send the differences between the current and previous snapshots to the target side. The procedure is always the same:

Create a new snapshot on the source side and make sure it is written to the disk.

Send the difference between the previous snapshot and the one you have created to the target side. A sketch of both the initial and an incremental transfer is shown below.
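A minimal sketch, assuming the subvolume /data is backed up to a Btrfs file system mounted at /backup (all paths and snapshot names are examples):

    # initial setup: create a read-only snapshot and send it in full
    btrfs subvolume snapshot -r /data /data/.snapshots/snap-old
    sync
    btrfs send /data/.snapshots/snap-old | btrfs receive /backup

    # incremental backup: create a new read-only snapshot and send only the difference
    btrfs subvolume snapshot -r /data /data/.snapshots/snap-new
    sync
    btrfs send -p /data/.snapshots/snap-old /data/.snapshots/snap-new | btrfs receive /backup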

Depending on how many snapshots you want to keep, there are several options.

Keep all snapshots on both sides. With this option you can roll back to any snapshot on both sides while having all data duplicated at the same time. No further action is required. When doing the next incremental backup, keep in mind to use the next-to-last snapshot as the parent for the send operation.

Only keep the last snapshot on the source side and all snapshots on the target side.

Only keep the last snapshot on both sides. This way you have a backup on the target side that represents the state of the last snapshot made on the source side. It is not possible to roll back to other snapshots.

To only keep the last snapshot on the source side, perform commands like the following:
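A sketch continuing the example above (paths are placeholders): the old snapshot is deleted and the new one takes over its name, so the same name can always be reused afterwards.

    # on the source side
    btrfs subvolume delete /data/.snapshots/snap-old
    mv /data/.snapshots/snap-new /data/.snapshots/snap-old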

As a consequence, you can also always use this subvolume name as a parent for the incremental send operation. To only keep the last snapshot on the target side, perform commands like the following:
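Again a sketch with placeholder paths; on the target side, the previously received snapshot can simply be deleted once the new one has arrived:

    # on the target side
    btrfs subvolume delete /backup/snap-old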

Btrfs supports data deduplication by replacing identical blocks in the file system with logical links to a single copy of the block in a common storage location. The duperemove tool can find duplicated blocks in a set of files; when used on a Btrfs file system, it can also deduplicate these blocks. To make it available, install the package duperemove. It is intended to be used to deduplicate a set of 10 to 50 large files that possibly have lots of blocks in common, such as virtual machine images. duperemove operates in two modes: read-only and de-duping. When run in read-only mode, that is, without the -d switch, it scans the given files or directories for duplicated blocks and prints them.

This works on any file system. Running duperemove in de-duping mode is only supported on Btrfs file systems. After having scanned the given files or directories, the duplicated blocks will be submitted for deduplication.
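A sketch of both modes, assuming /var/lib/libvirt/images is a directory of virtual machine images (the path is an example; -r recurses into directories, -h prints human-readable sizes, -d actually deduplicates):

    # read-only mode: only report duplicated blocks
    duperemove -rh /var/lib/libvirt/images

    # de-duping mode (Btrfs only): submit duplicated blocks for deduplication
    duperemove -rdh /var/lib/libvirt/images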

You may need to delete one of the default Btrfs subvolumes from the root file system for specific purposes. The sketch below illustrates how to delete a Btrfs subvolume. Note that the top-level root path always has subvolume ID 5.
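A minimal sketch, assuming the root file system lives on /dev/sda2 and the subvolume to remove is @/var/crash (both are hypothetical):

    # list the subvolumes below the root file system
    btrfs subvolume list /
    # mount the top-level subvolume (ID 5) to reach all subvolumes by path
    mount -o subvolid=5 /dev/sda2 /mnt
    # delete the unwanted subvolume and clean up
    btrfs subvolume delete /mnt/@/var/crash
    umount /mnt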

The idea behind XFS was to create a high-performance 64-bit journaling file system to meet extreme computing challenges. XFS is very good at manipulating large files and performs well on high-end hardware. At the creation time of an XFS file system, the block device underlying the file system is divided into eight or more linear regions of equal size. Those are called allocation groups. Each allocation group manages its own inodes and free disk space. Practically, allocation groups can be seen as file systems in a file system.

Because allocation groups are rather independent of each other, more than one of them can be addressed by the kernel simultaneously. Naturally, the concept of independent allocation groups suits the needs of multiprocessor systems. XFS uses delayed allocation, which handles allocation by breaking the process into two pieces.

A pending transaction is stored in RAM and the appropriate amount of space is reserved. XFS still does not decide in which file system blocks exactly the data should be stored. This decision is delayed until the last possible moment. Some short-lived temporary data might never make its way to disk, because it is obsolete by the time XFS decides where actually to save it. In this way, XFS increases write performance and reduces file system fragmentation.

Because delayed allocation results in less frequent write events than in other file systems, it is likely that data loss after a crash during a write is more severe. Before writing the data to the file system, XFS reserves (preallocates) the free space needed for a file. Thus, file system fragmentation is greatly reduced.

Performance is increased because the contents of a file are not distributed all over the file system.

The main advantages of the newer XFS on-disk format are automatic checksums of all XFS metadata, file type support, and support for a larger number of access control lists for a file.

The newer format will be problematic if the file system should also be used from systems that do not support it. If you require interoperability of the XFS file system with older SUSE systems or other Linux distributions, format the file system manually using the mkfs.xfs command.
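A sketch of such an invocation; the device name is a placeholder, and -m crc=0 selects the older on-disk format without metadata checksums:

    # format with the older XFS on-disk format for interoperability
    mkfs.xfs -m crc=0 /dev/sdb1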

The origins of Ext2 go back to the early days of Linux history. Its predecessor, the Extended File System, underwent several modifications and, as Ext2, became the most popular Linux file system for years. With the creation of journaling file systems and their short recovery times, Ext2 became less important. Still, Ext2 has been improved and heavily tested over many years, which might be the reason people often refer to it as rock-solid.

After a system outage when the file system could not be cleanly unmounted, e2fsck starts to analyze the file system data.

In contrast to journaling file systems, e2fsck analyzes the entire file system and not only the recently modified bits of metadata. This takes significantly longer than checking the log data of a journaling file system. Depending on file system size, this procedure can take half an hour or more. Therefore, it is not desirable to choose Ext2 for any server that needs high availability. However, because Ext2 does not maintain a journal and uses less memory, it is sometimes faster than other file systems.

Ext3 was designed by Stephen Tweedie. Unlike other next-generation file systems, Ext3 does not follow a completely new design principle; it is based on Ext2. These two file systems are very closely related to each other. An Ext3 file system can easily be built on top of an Ext2 file system. Because Ext3 is based on the Ext2 code and shares its on-disk format and its metadata format, upgrades from Ext2 to Ext3 are very easy.

The most important difference between Ext2 and Ext3 is that Ext3 supports journaling. In summary, Ext3 has three major advantages to offer. The code for Ext2 is the strong foundation on which Ext3 could become a highly acclaimed next-generation file system. Its reliability and solidity are elegantly combined in Ext3 with the advantages of a journaling file system. Unlike transitions to other journaling file systems, such as ReiserFS or XFS, which can be quite tedious (making backups of the entire file system and re-creating it from scratch), a transition to Ext3 is a matter of minutes.

It is also very safe, because re-creating an entire file system from scratch might not work flawlessly. Considering the number of existing Ext2 systems that await an upgrade to a journaling file system, you can easily see why Ext3 might be of some importance to many system administrators. Downgrading from Ext3 to Ext2 is as easy as the upgrade. Perform a clean unmount of the Ext3 file system and remount it as an Ext2 file system.

Some journaling file systems take a metadata-only approach. This means your metadata is always kept in a consistent state, but this cannot be automatically guaranteed for the file system data itself. Ext3 is designed to take care of both metadata and data. In the data=ordered mode, the file system driver collects all data blocks that correspond to one metadata update. These data blocks are written to disk before the metadata is updated.

As a result, consistency is achieved for metadata and data without sacrificing performance. The data=writeback mode is often considered the best in performance. It can, however, allow old data to reappear in files after a crash and recovery, while internal file system integrity is maintained.

Create an Ext3 journal by running tune2fs -j as the root user. Then, in /etc/fstab, change the file system type of the corresponding partition from ext2 to ext3. This ensures that the Ext3 file system is recognized as such. The change takes effect after the next reboot. More information about the tune2fs program is available in the tune2fs man page.
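A sketch of the conversion, with /dev/sda5 and the mount point /data as placeholder values:

    # add a journal to the existing Ext2 file system
    tune2fs -j /dev/sda5

    # then change the file system type in the corresponding /etc/fstab entry, for example:
    /dev/sda5  /data  ext3  defaults  1 2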

To boot a root file system that is set up as an Ext3 partition, include the modules ext3 and jbd in the initrd.

An inode stores information about the file and its block location in the file system. Compared to SLES 10, when you make a new Ext3 file system on SLES 11, the default amount of space preallocated for the same number of inodes is doubled, and the usable space for files in the file system is reduced by that amount. Thus, you must use larger partitions to accommodate the same number of inodes and files than were possible for an Ext3 file system on SLES 10. When you create a new Ext3 file system, the space in the inode table is preallocated for the total number of inodes that can be created.

The bytes-per-inode ratio and the size of the file system determine how many inodes are possible. When the file system is made, an inode is created for every bytes-per-inode bytes of space. The number of inodes controls the number of files you can have in the file system: one inode for each file.
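As a worked sketch, assume a 10 GB file system created with a bytes-per-inode ratio of 16384 bytes (the ratio is an example; the actual default depends on the mkfs defaults of your release):

    number of inodes = 10 * 1024^3 bytes / 16384 bytes per inode = 655,360 inodes

    # the ratio can be set explicitly at creation time, for example:
    mkfs.ext3 -i 16384 /dev/sdb1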
