Article from: https://www.cnblogs.com/zywu-king/p/9064032.html

      Whether you want to provide Ceph object storage and/or Ceph block devices to a cloud platform, deploy a Ceph file system, or use Ceph for any other purpose, every Ceph storage cluster deployment begins with setting up the individual Ceph nodes, the network, and the Ceph storage cluster itself. A Ceph storage cluster requires at least one Ceph Monitor and two OSD daemons. When running Ceph file system clients, a metadata server (Metadata Server, MDS) is also required.

  • Ceph OSDs: A Ceph OSD daemon (Ceph OSD) stores data, handles data replication, recovery, backfilling and rebalancing, and provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When a Ceph storage cluster is configured to keep two copies of the data, it needs at least two OSD daemons for the cluster to reach the active + clean state (Ceph keeps three copies by default, but you can adjust the number of replicas).
  • Monitors: A Ceph Monitor maintains the maps that describe the cluster state, including the monitor map, the OSD map, the placement group (PG) map and the CRUSH map. Ceph also keeps a history (called an epoch) of every state change on the Monitors, OSDs and PGs.
  • MDSs: A Ceph metadata server (MDS) stores metadata for the Ceph file system (Ceph block devices and Ceph object storage do not use MDS). Metadata servers make it possible for POSIX file system users to run basic commands such as ls and find without placing a burden on the Ceph storage cluster.
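Once a cluster is deployed (as described below), a quick, hedged way to see these daemons at work is to query the cluster status; the exact output depends on your cluster:

# Overall status: health, monitor quorum, OSD count and usage, and MDS state if any
ceph -s
# Monitor quorum only
ceph mon stat
# How OSDs are distributed across hosts
ceph osd tree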

 

Hardware recommendation

Ceph is designed to run on commodity hardware, which makes building and maintaining petabyte-scale data clusters economical. When planning cluster hardware you need to balance several considerations, including failure domains and potential performance issues. Hardware planning should distribute the Ceph daemons and the other processes that use the Ceph cluster appropriately across machines. Generally, we recommend running only one type of daemon on a single machine, and installing the processes that consume your data cluster (such as OpenStack, CloudStack, and so on) on other machines.

 

Ceph can run on inexpensive commodity hardware; small production clusters and development clusters run fine on ordinary hardware.


 

Recommended operating system

 

The following table shows which Ceph releases correspond to which Linux distributions. Generally speaking, Ceph depends very little on specific kernels and system init frameworks (sysvinit, upstart, systemd, and so on).

[Table: Ceph releases and supported Linux distributions]

Installation (quick)

Step one: Preflight

Before deploying a Ceph storage cluster, some basic configuration is needed on the Ceph client and the Ceph nodes; you can also turn to the Ceph community for help.

  • Preflight
    • Install the Ceph deployment tool
      • Advanced package management tool (APT)
      • Red hat packet management tool (RPM)
    • Ceph Node installation
      • Install NTP
      • Install the SSH server
      • Create a user to deploy Ceph
      • Allow password-free SSH login
      • Boot time networking
      • Ensure connectivity
      • Open the required port
      • Terminal (TTY)
      • SELinux
      • Priority / preference
    • summary
Step two: storage cluster

After completing the preflight checks, you can begin deploying the Ceph storage cluster.

  • Storage cluster quick start
    • Create a cluster
    • Operating the cluster
    • Expanding the cluster
      • Add an OSD
      • Add a metadata server
      • Add an RGW instance
      • Add Monitors
    • Store/retrieve object data
Step three: Ceph clients

Most Ceph users never store objects directly in the Ceph storage cluster; they typically use at least one of the three major services: Ceph block devices, the Ceph file system, or Ceph object storage.

  • Block device quick start
    • Install Ceph
    • Configure a block device
  • File system quick start
    • Preparation
    • Creating a file system
    • Creating a key file
    • Kernel driver
    • User space file system (FUSE)
    • Additional information
  • Object storage quick start
    • Install the Ceph object gateway
    • New Ceph object gateway instance
    • Configuring an instance of the Ceph object gateway

 

Preflight

New in version 0.60.

Thank you for trying Ceph! We recommend setting up a ceph-deploy admin node and a three-node Ceph storage cluster to explore the basic features of Ceph. This preflight section will prepare a ceph-deploy admin node and three Ceph nodes (or virtual machines) that together make up the Ceph storage cluster. Before proceeding, check the operating system recommendations to confirm that you have installed a suitable Linux distribution. Deploying only a single Linux distribution, at the same version, across the whole production cluster makes it easier to troubleshoot problems in production.

In the following descriptions, node refers to a single machine.

Install the Ceph deployment tool

Add the Ceph repository to your ceph-deploy admin node, then install ceph-deploy:

  1. On CentOS, the following commands can be executed:

    sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
  2. Add the package source to your software repository. Use a text editor to create a YUM (Yellowdog Updater, Modified) repo file at /etc/yum.repos.d/ceph.repo. For example:

    sudo vim /etc/yum.repos.d/ceph.repo

    Paste the following contents in, replacing {ceph-stable-release} with the latest stable Ceph release (such as firefly) and {distro} with your Linux distribution (e.g. el6 for CentOS 6, el7 for CentOS 7, rhel6 for Red Hat 6.5, rhel7 for Red Hat 7, fc19 for Fedora 19, fc20 for Fedora 20). Finally, save the result as /etc/yum.repos.d/ceph.repo; a filled-in example follows this list.

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
  3. Update your software repository and install ceph-deploy:

    sudo yum update && sudo yum install ceph-deploy
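For reference, a filled-in version of the repository file, assuming the firefly release on CentOS 7 (el7); substitute the release and distribution you actually use:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc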

Note

You can also download the packages from the European mirror eu.ceph.com: just replace http://ceph.com/ with http://eu.ceph.com/.

Ceph node installation

Your admin node must be able to reach the Ceph nodes over password-free SSH. If ceph-deploy logs in as an ordinary user, that user must have password-free sudo privileges.

Install NTP

We recommend installing an NTP service on all Ceph nodes (and especially on Ceph Monitor nodes) to avoid failures caused by clock drift. See Clock for details.

On CentOS / RHEL, execute:

sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute:

sudo apt-get install ntp

Make sure the NTP service is started on all Ceph nodes and that they all use the same NTP servers. See NTP for details.
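A minimal sketch for enabling and checking the service on CentOS/RHEL 7 (the ntp package provides the ntpd service); the peer list printed by ntpq depends on your configuration:

# Start ntpd now and at every boot
sudo systemctl enable ntpd
sudo systemctl start ntpd
# Verify that the daemon can reach its time sources
ntpq -p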

Install the SSH server

Perform the following steps on all Ceph nodes:

  1. Install an SSH server on each Ceph node (if not already installed):

    sudo apt-get install openssh-server

    or

    sudo yum install openssh-server
  2. Ensure that the SSH server is running on all Ceph nodes.

Create a user to deploy Ceph

The ceph-deploy tool must log in to each Ceph node as an ordinary user, and that user must have password-free sudo privileges, because ceph-deploy installs software and writes configuration files without prompting for passwords.

Recent versions of ceph-deploy support the --username option for specifying any user that has password-free sudo (including root, although this is not recommended). When you run ceph-deploy --username {username}, the specified user must be able to reach the Ceph nodes over password-free SSH, because ceph-deploy never prompts for a password partway through.

We recommend creating a dedicated user for ceph-deploy on all Ceph nodes in the cluster, but do not use the name “ceph”. Using the same username across the whole cluster simplifies operations (though it is not required). Avoid well-known usernames, however, since attackers may use them for brute-force attacks (e.g. root, admin, {productname}). The following steps describe how to create a user with password-free sudo; substitute your chosen name for {username}.

Note

Starting with the Infernalis release, the username “ceph” is reserved for the Ceph daemons. If a “ceph” user already exists on a Ceph node, it must be removed before upgrading.

  1. Create new users at each of the Ceph nodes.

    ssh user@ceph-server
    sudo useradd -d /home/{username} -m {username}
    sudo passwd {username}
  2. Ensure that the newly created user on each Ceph node has sudo privileges:

    echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
    sudo chmod 0440 /etc/sudoers.d/{username}
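Put together, with a hypothetical deployment user named cephdeploy (any name other than “ceph” will do), the two steps look like this:

ssh user@ceph-server
sudo useradd -d /home/cephdeploy -m cephdeploy
sudo passwd cephdeploy
echo "cephdeploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephdeploy
sudo chmod 0440 /etc/sudoers.d/cephdeploy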
Allow password-free SSH login

Because ceph-deploy does not support entering passwords, you must generate an SSH key pair on the admin node and distribute its public key to every Ceph node. ceph-deploy will try to generate SSH key pairs for the initial monitors.

  1. Generate an SSH key pair, but do not use sudo or the root user. When prompted “Enter passphrase”, leave it empty:

    ssh-keygen
    
    Generating public/private key pair.
    Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /ceph-admin/.ssh/id_rsa.
    Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
  2. Copy the public key to each Ceph node, replacing {username} in the following commands with the name of the user you created for deploying Ceph:

    ssh-copy-id {username}@node1
    ssh-copy-id {username}@node2
    ssh-copy-id {username}@node3
  3. (Recommended) Modify the ~/.ssh/config file on your ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes with the user you created, without having to specify --username {username} every time you run ceph-deploy. This also simplifies the use of ssh and scp. Replace {username} with the username you created:

    Host node1
       Hostname node1
       User {username}
    Host node2
       Hostname node2
       User {username}
    Host node3
       Hostname node3
       User {username}
Boot time networking

Ceph OSD daemons interconnect over the network and report their status to the Monitors. If networking is off by default, the Ceph cluster cannot come online until you bring the network up.

Some distributions, such as CentOS, deactivate network interfaces by default. You therefore need to make sure the network interfaces come up at boot so that the Ceph daemons can communicate over the network. On Red Hat and CentOS, for example, go to the /etc/sysconfig/network-scripts directory and make sure ONBOOT is set to yes in the ifcfg-{iface} file.
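A minimal sketch of the relevant setting, assuming a hypothetical interface named eth0; leave the rest of the file as the installer generated it:

# /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes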

Ensure connectivity

Use ping with short hostnames (hostname -s) to confirm network connectivity between the nodes, and resolve any hostname resolution problems.

Note

Hostnames should resolve to a network IP address, not to the loopback address (that is, a hostname should resolve to an IP address other than 127.0.0.1). If your admin node is also a Ceph node, make sure it too resolves its hostname to a non-loopback IP address.
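For example, a minimal /etc/hosts on each node might look like the following (the addresses are hypothetical); afterwards, check connectivity with something like ping -c 3 node2 from each node:

192.168.0.11   node1
192.168.0.12   node2
192.168.0.13   node3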

Open the required port

Ceph Monitors communicate on port 6789 by default, and OSD daemons communicate on ports in the 6800:7300 range by default. See the Network Configuration Reference for details. Ceph OSDs can use multiple network connections for replication and heartbeat traffic with clients, monitors and other OSDs.

The default firewall configuration of some distributions (such as RHEL) is fairly strict; you may need to adjust the firewall to allow the corresponding inbound requests so that clients can communicate with the daemons on the Ceph nodes.

For firewalld on RHEL 7, open port 6789 for Ceph Monitors and the 6800:7300 range for OSDs in the public zone, and make the rules permanent so that they survive a reboot. For example:

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
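The OSD port range can be opened the same way (a sketch; adjust the zone if yours differs), followed by a reload so the permanent rules take effect:

sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload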

If you use iptables, open port 6789 for Ceph Monitors and the 6800:7300 range for OSDs. The commands are as follows:

sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
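A corresponding rule for the OSD port range, using the same placeholders for the interface and subnet, might look like this:

sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT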

After configuring iptables on each node, be sure to save the rules so that they remain in effect after a reboot. For example:

/sbin/service iptables save
Terminal (TTY)

Running ceph-deploy on CentOS and RHEL may fail if your Ceph nodes have requiretty set by default. Disable it by running sudo visudo, finding the Defaults requiretty option, and changing it to Defaults:ceph !requiretty (or commenting it out), so that ceph-deploy can connect with the user you created earlier (see Create a user to deploy Ceph).

Note

When editing the configuration file /etc/sudoers, always use sudo visudo rather than a plain text editor.

SELinux

On CentOS and RHEL, SELinux is in Enforcing mode by default. To simplify installation, we recommend setting SELinux to Permissive or disabling it entirely until you have verified that the cluster installs and runs correctly, and only then hardening the configuration again. Set SELinux to Permissive with the following command:

sudo setenforce 0

To make the SELinux setting permanent (if SELinux turns out to be the root cause of your problems), modify its configuration file /etc/selinux/config.
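A minimal sketch of the relevant line in /etc/selinux/config:

# Log policy violations instead of enforcing them until the cluster is verified
SELINUX=permissive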

Priority / preference

Make sure your package manager has the priorities/preferences package installed and enabled. On CentOS you may need to install EPEL; on RHEL you may need to enable the Optional repository.

sudo yum install yum-plugin-priorities

For example, on a RHEL 7 server the following command installs yum-plugin-priorities and enables the rhel-7-server-optional-rpms repository:

sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms

Storage cluster quick start

 

If you have not completed the preflight checks yet, do that first. This quick start uses ceph-deploy to set up a Ceph storage cluster from the admin node; the cluster consists of three nodes so that you can explore Ceph's functionality.

 

To begin, create a Ceph storage cluster with one Monitor and two OSD daemons. Once the cluster reaches the active + clean state, expand it: add a third OSD, a metadata server and two more Ceph Monitors. For the best experience, create a directory on the admin node to hold the configuration files and keys that ceph-deploy generates:

mkdir my-cluster
cd my-cluster

ceph-deploy writes its output files to the current directory, so make sure you run ceph-deploy from this directory.

Important

If you are logged in as a different ordinary user, do not run ceph-deploy with sudo or as root, because it will not issue the sudo commands that are needed on the remote hosts.

Disable requiretty

On some distributions, such as CentOS, running ceph-deploy will fail if your Ceph nodes have requiretty set by default. Disable it like this: run sudo visudo, find the Defaults requiretty option, and change it to Defaults:ceph !requiretty, so that ceph-deploy can log in as the ceph user and use sudo.

Create a cluster

If you run into trouble at any point and want to start over, you can purge the configuration with the following commands:

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

Use the following command to remove the Ceph installation package.

ceph-deploy purge {ceph-node} [{ceph-node}]

If you ran purge, you must reinstall Ceph afterwards.

On the admin node, change into the directory you created to hold the configuration files, then carry out the following steps with ceph-deploy.

  1. Create a cluster.

    ceph-deploy new {initial-monitor-node(s)}

    For example:

    ceph-deploy new node1

    Check the output of ceph-deploy with ls and cat in the current directory: you should see a Ceph configuration file, a monitor secret keyring and a log file. See ceph-deploy new -h for details.

  2. Change the default number of replicas in the Ceph configuration file from 3 to 2, so that the cluster can reach the active + clean state with only two OSDs. Add the following line to the [global] section:

    osd pool default size = 2
  3. If you have more than one network interface, add the public network setting under the [global] section of the Ceph configuration file (a combined example of the edited [global] section appears after the notes below). See the Network Configuration Reference for details.

    public network = {ip-address}/{netmask}
  4. Install Ceph.

    ceph-deploy install {ceph-node} [{ceph-node} ...]

    For example:

    ceph-deploy install admin-node node1 node2 node3

    ceph-deploy will install Ceph on each node. Note: if you ran ceph-deploy purge earlier, you must run this step again to reinstall Ceph.

  5. Configure the initial monitor(s) and gather all the keys:

    ceph-deploy mon create-initial

    Once these operations complete, the following keyrings should appear in your current directory:

    • {cluster-name}.client.admin.keyring
    • {cluster-name}.bootstrap-osd.keyring
    • {cluster-name}.bootstrap-mds.keyring
    • {cluster-name}.bootstrap-rgw.keyring

Note

The bootstrap-rgw keyring is only created when you install Hammer or a newer release.

Note

If this step fails with output like “Unable to find /etc/ceph/ceph.client.admin.keyring”, confirm that the monitor IP specified in ceph.conf is the public IP, not a private IP.
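Putting steps 2 and 3 together, the edited [global] section of ceph.conf might look roughly like the following sketch; the fsid and monitor entries are generated by ceph-deploy new and will differ on your system, and the addresses remain placeholders:

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = node1
mon_host = {ip-address}
# Added in step 2: two object replicas are enough for this test cluster
osd pool default size = 2
# Added in step 3: only needed when the hosts have more than one NIC
public network = {ip-address}/{netmask}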

  1. Add two OSDs. For fast installation, this quick start uses a directory rather than a whole disk for each OSD daemon. To use separate disks or partitions for OSDs and their journals, see ceph-deploy osd. Log in to the Ceph nodes and create a directory for each OSD daemon:

    ssh node2
    sudo mkdir /var/local/osd0
    exit
    
    ssh node3
    sudo mkdir /var/local/osd1
    exit

    Then, from the admin node, run ceph-deploy to prepare the OSDs:

    ceph-deploy osd prepare {ceph-node}:/path/to/directory

    For example:

    ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

    Finally, activate the OSDs:

    ceph-deploy osd activate {ceph-node}:/path/to/directory

    For example:

    ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
  2. Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph command line without having to specify the monitor address and ceph.client.admin.keyring every time:

    ceph-deploy admin {admin-node} {ceph-node}

    For example:

    ceph-deploy admin admin-node node1 node2 node3

    When ceph-deploy communicates with the local admin host (admin-node), that host must be reachable by its hostname. If necessary, modify /etc/hosts and add the admin host's name.

  3. Make sure you have the correct permissions on ceph.client.admin.keyring:

    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  4. Check the health of the cluster.

    ceph health

    Once peering completes, the cluster should reach the active + clean state.

Operating the cluster

When ceph-deploy finishes the deployment, it starts the cluster automatically. To operate the cluster daemons on Debian/Ubuntu distributions, see Running Ceph with Upstart; on CentOS, Red Hat, Fedora and SLES, see Running Ceph with sysvinit.

For peering and cluster health, see Monitoring a Cluster; for OSD daemon and placement group (PG) health, see Monitoring OSDs and PGs; for user management, see User Management.

Once your Ceph cluster is deployed, you can try out its administration functions and the rados object store commands, then continue with the quick start guides to learn about the Ceph block device, the Ceph file system and the Ceph object gateway.

Expanding the cluster

Once a basic cluster is up and running, the next step is to expand it. Add an OSD daemon and a metadata server on node1, then add a Ceph Monitor on node2 and node3 to form a quorum of Monitors.

Add an OSD

The three-node cluster you are running is only for demonstration purposes, so here the OSD is added to the monitor node:

ssh node1
sudo mkdir /var/local/osd2
exit

Then, prepare the OSD from the ceph-deploy node:

ceph-deploy osd prepare {ceph-node}:/path/to/directory

For example:

ceph-deploy osd prepare node1:/var/local/osd2

Finally, activate the OSD:

ceph-deploy osd activate {ceph-node}:/path/to/directory

For example:

ceph-deploy osd activate node1:/var/local/osd2

Once the new OSD is added, the Ceph cluster starts rebalancing and migrating placement groups to the new OSD. You can watch this process with the ceph command:

ceph -w

You should see the placement group states change from active + clean to active with some degraded objects, and then back to active + clean when the migration completes (press Control-C to exit).

Add a metadata server

At least one metadata server is needed to use CephFS. Create a metadata server by executing the following command:

ceph-deploy mds create {ceph-node}

For example:

ceph-deploy mds create node1

Note

In the current production configuration, Ceph runs only one metadata server. You can configure more than one, but clusters with multiple metadata servers are not yet commercially supported.

Add an RGW instance

To use the Ceph Object Gateway component of Ceph, you must deploy an RGW instance. Create a new RGW instance as follows:

ceph-deploy rgw create {gateway-node}

For example:

ceph-deploy rgw create node1

Note

This feature is available starting with the Hammer release and ceph-deploy v1.5.23.

By default the RGW instance listens on port 7480. You can change this by editing ceph.conf on the node running the RGW instance, as follows:

[client]
rgw frontends = civetweb port=80

To use an IPv6 address instead:

[client]
rgw frontends = civetweb port=[::]:80
Add Monitors

A Ceph storage cluster requires at least one Monitor to run. For high availability, a typical Ceph storage cluster runs multiple Monitors so that the failure of a single Monitor does not take the whole cluster down. Ceph uses the Paxos algorithm, which requires a majority of the monitors (i.e. 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, and so on) to form a quorum.

Add two monitors to the Ceph cluster.

ceph-deploy mon add {ceph-node}

For example:

ceph-deploy mon add node2 node3

After the new Monitors are added, Ceph automatically starts synchronizing them and forms a quorum. You can check the quorum status with the following command:

ceph quorum_status --format json-pretty

Tip

When your Ceph cluster runs multiple Monitors, NTP should be configured on all Monitor hosts, and the Monitors should all sit at the same NTP stratum.

Store/retrieve object data

To store object data in the Ceph storage cluster, a Ceph client must:

  1. Set an object name
  2. Specify a storage pool

The Ceph client fetches the latest cluster map and uses the CRUSH algorithm to compute how to map the object to a placement group, and then dynamically computes how to assign the placement group to an OSD. To locate an object, all you need is the object name and the pool name:

ceph osd map {poolname} {object-name}

Practice: locating an object

As an exercise, let's create an object. Using the rados put command, specify an object name, the path to a test file containing some data, and a pool name. For example:

echo {Test-data} > testfile.txt
rados put {object-name} {file-path} --pool=data
rados put test-object-1 testfile.txt --pool=data

To verify that the Ceph storage cluster stored the object, run:

rados -p data ls

Now, identify the object's location:

ceph osd map {pool-name} {object-name}
ceph osd map data test-object-1

Ceph should output the object's location, for example:

osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]

You can delete the test object with the rados rm command, for example:

rados rm test-object-1 --pool=data

As the cluster evolves, an object's location may change dynamically. One benefit of Ceph's dynamic rebalancing is that you do not have to carry out such migrations manually.

 

Block device quick start

To work through this guide, you must first complete the storage cluster quick start and make sure the Ceph storage cluster is in the active + clean state before using Ceph block devices.

Note

Ceph block devices are also known as RBD or RADOS block devices.

You can run the ceph-client node on a virtual machine, but do not perform the following steps on the same physical node as any of your Ceph storage cluster nodes (unless they are virtual machines too). See the FAQ for details.

Install Ceph

  1. Confirm that you have used the appropriate kernel version. See the operating system recommendation for details.

    lsb_release -a
    uname -r
    
  2. On the admin node, use ceph-deploy to install Ceph on your ceph-client node:

    ceph-deploy install ceph-client
  3. On the admin node, use ceph-deploy to copy the Ceph configuration file and ceph.client.admin.keyring to the ceph-client node:

    ceph-deploy admin ceph-client

    The ceph-deploy tool copies the keyring to the /etc/ceph directory. Make sure the keyring file is readable (e.g. sudo chmod +r /etc/ceph/ceph.client.admin.keyring).

Configure a block device

  1. On the ceph-client node, create a block device image:

    rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
  2. On the ceph-client node, map the image to a block device:

    sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
  3. On the ceph-client node, create a file system on the block device so you can use it:

    sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

    This command may take a long time.

  4. On the ceph-client node, mount the file system:

    sudo mkdir /mnt/ceph-block-device
    sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
    cd /mnt/ceph-block-device
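As a quick sanity check (a sketch; the file name is arbitrary), write something to the mounted file system and read it back:

echo "hello ceph" | sudo tee /mnt/ceph-block-device/hello.txt
cat /mnt/ceph-block-device/hello.txt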

 

 

Ceph file system quick start

Before starting this Ceph file system guide, you must first complete the storage cluster quick start. Carry out this guide on the admin node.

Preparation

  1. Confirm that you have used the appropriate kernel version. See the operating system recommendation for details.

    lsb_release -a
    uname -r
    
  2. On the admin node, use ceph-deploy to install Ceph on the ceph-client node:

    ceph-deploy install ceph-client
  3. Make sure the Ceph storage cluster is running and in the active + clean state. Also make sure at least one Ceph metadata server is running:

    ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]

Creating a file system

Although the metadata server has already been created (in the storage cluster quick start), it will not become active until you create some storage pools and a file system. See Creating a Ceph file system:

ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata <pg_num>
ceph fs new <fs_name> cephfs_metadata cephfs_data
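For a small test cluster like this one, a filled-in sketch might use a modest placement-group count (the file system name cephfs and the pg_num of 8 are arbitrary choices here); ceph mds stat should then show the metadata server becoming active:

ceph osd pool create cephfs_data 8
ceph osd pool create cephfs_metadata 8
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat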

Creating a key file

Ceph storage clusters have authentication enabled by default. You should have a file containing the secret key (as opposed to the keyring itself). To obtain the key for a particular user, do the following:

  1. In a keyring file, find the key for the user you want to use, for example:

    cat ceph.client.admin.keyring
  2. Find the key of the user that will mount the Ceph file system and copy it; it looks something like this:

    [client.admin]
       key = AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==
  3. Open the text editor.

  4. Paste the key in; it should look something like this:

    AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==
  5. Save the file, using the user name as part of the file name (e.g. admin.secret).

  6. Ensure that this file has appropriate permissions for users, but is not visible to other users.
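If the admin keyring is readable on this node, the same secret file can be produced in one step with ceph auth get-key, which prints only the key (a sketch, not part of the original walkthrough):

# Write just the key for client.admin to a file only you can read
ceph auth get-key client.admin > admin.secret
chmod 600 admin.secret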

Kernel driver

Mount Ceph FS with the kernel driver:

sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs

Ceph storage clusters require authentication by default, so when mounting you must specify the user name (name) and the secret file created in the Creating a key file section (secretfile). For example:

sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret

Note

Mount the Ceph FS file system from the admin node rather than from a server node. See the FAQ for details.

User space file system (FUSE)

Mount Ceph FS as a user space file system (FUSE).

sudo mkdir ~/mycephfs
sudo ceph-fuse -m {ip-address-of-monitor}:6789 ~/mycephfs

Because the Ceph storage cluster requires authentication by default, ceph-fuse needs the corresponding keyring file specified unless it is in the default location (/etc/ceph):

sudo ceph-fuse -k ./ceph.client.admin.keyring -m 192.168.0.1:6789 ~/mycephfs

 

Ceph object storage quick start

Starting with firefly (v0.80), Ceph storage clusters significantly simplify the installation and configuration of the Ceph Object Gateway. The gateway daemon embeds Civetweb, so there is no need to install a web server or configure FastCGI. In addition, ceph-deploy can be used directly to install the gateway package, generate the keys, configure the data directory and create a gateway instance.

Tip

Civetweb uses port 7480 by default. Either open port 7480 directly, or set your preferred port (e.g. port 80) in your Ceph configuration file.

To use the Ceph object gateway, please perform the following steps:

Install the Ceph object gateway

  1. On client-node, carry out the preflight installation steps first. If you plan to use Civetweb's default port 7480, you must open it with firewall-cmd or iptables. See the preflight section for details.

  2. From the working directory of your admin node, install the Ceph Object Gateway package on the client-node node. For example:

    ceph-deploy install --rgw <client-node> [<client-node> ...]

New Ceph object gateway instance

From the working directory of your admin node, create a new instance of the Ceph Object Gateway on client-node. For example:

ceph-deploy rgw create <client-node>

Once the gateway is running, you can access it on port 7480 (e.g. http://client-node:7480).

Configuring an instance of the Ceph object gateway

  1. You can change the default port (e.g. to port 80) by modifying the Ceph configuration file. Add a section named [client.rgw.<client-node>], replacing <client-node> with the short hostname of your Ceph client node (i.e. the output of hostname -s). For example, if your node name is client-node, add a section like the following after the [global] section:

    [client.rgw.client-node]
    rgw_frontends = "civetweb port=80"
    

    Note

    Make sure there is no whitespace within port=<port-number> in the rgw_frontends key/value pair.

    Important

    If you intend to use port 80, make sure the Apache server is not already using that port, otherwise it will conflict with Civetweb. In that case we suggest removing the Apache service.

  2. For the new port to take effect, the Ceph Object Gateway must be restarted. On RHEL 7 and Fedora, run:

    sudo systemctl restart ceph-radosgw.service

    On RHEL 6 and Ubuntu, run:

    sudo service radosgw restart id=rgw.<short-hostname>
  3. Finally, check the firewall on the node running the gateway and make sure the port you chose (e.g. port 80) is open. If it is not, add the port to the allow rules and reload the firewall configuration. For example:

    sudo firewall-cmd --list-all
    sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
    sudo firewall-cmd --reload

    For details on configuring a firewall with firewall-cmd or iptables, see the preflight section.

    You should now be able to make an unauthenticated request and receive a response. For example, a request with no parameters:

    http://<client-node>:80

    You should receive a response like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
      </Owner>
      <Buckets>
      </Buckets>
    </ListAllMyBucketsResult>
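One way to issue that request from the command line, assuming curl is installed and client-node resolves to your gateway host:

curl http://client-node:80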