# -*- coding: utf-8 -*-
'''
Manage GlusterFS pool.
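
The states below compose into a small pool: peer the nodes, then create and
start a volume. A minimal sketch; the minion IDs and brick paths here are
illustrative only:

.. code-block:: yaml

    add-peer:
      glusterfs.peered:
        - name: gluster2

    data-volume:
      glusterfs.volume_present:
        - bricks:
          - gluster1:/srv/gluster/drive1
          - gluster2:/srv/gluster/drive1
        - replica: 2
        - start: True
        - require:
          - glusterfs: add-peer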
'''
from __future__ import absolute_import, unicode_literals, print_function, generators

import logging

import salt.utils.cloud as suc
import salt.utils.network
from salt.exceptions import SaltCloudException

log = logging.getLogger(__name__)

# Peer-probe result messages, indexed by the code gluster returns.
RESULT_CODES = [
    'Peer {0} added successfully.',
    'Probe on localhost not needed',
    'Host {0} is already in the peer group',
    'Host {0} is already part of another cluster',
    'Volume on {0} conflicts with existing volumes',
    'UUID of {0} is the same as local uuid',
    '{0} responded with "unknown peer". This could happen if {0} doesn\'t have localhost defined',
    'Failed to add peer. Information on {0}\'s logs',
    'Cluster quorum is not met. Changing peers is not allowed.',
    'Failed to update list of missed snapshots from {0}',
    'Conflict comparing list of snapshots from {0}',
    'Peer is already being detached from cluster.',
]


def __virtual__():
    '''
    Only load this module if the gluster command exists
    '''
    if 'glusterfs.list_volumes' in __salt__:
        return 'glusterfs'
    return False


def peered(name):
    '''
    Check if node is peered.

    name
        The remote host with which to peer.

    .. code-block:: yaml

        peer-cluster:
          glusterfs.peered:
            - name: two

        peer-clusters:
          glusterfs.peered:
            - names:
              - one
              - two
              - three
              - four
    '''
    ret = {'name': name,
           'changes': {},
           'comment': '',
           'result': False}

    try:
        suc.check_name(name, 'a-zA-Z0-9._-')
    except SaltCloudException:
        ret['comment'] = 'Invalid characters in peer name.'
        return ret

    # Peering with an address that resolves to this minion is a no-op.
    name_ips = salt.utils.network.host_to_ips(name)
    if name_ips is not None:
        this_ips = set(salt.utils.network.ip_addrs())
        this_ips.update(salt.utils.network.ip_addrs6())
        if this_ips.intersection(name_ips):
            ret['result'] = True
            ret['comment'] = 'Peering with localhost is not needed'
            return ret

    peers = __salt__['glusterfs.peer_status']()

    if peers and any(name in v['hostnames'] for v in peers.values()):
        ret['result'] = True
        ret['comment'] = 'Host {0} already peered'.format(name)
        return ret

    if __opts__['test']:
        ret['comment'] = 'Peer {0} will be added.'.format(name)
        ret['result'] = None
        return ret

    if not __salt__['glusterfs.peer'](name):
        ret['comment'] = 'Failed to peer with {0}, please check logs for errors'.format(name)
        return ret

    # Re-read the peer list to confirm the probe actually took effect.
    newpeers = __salt__['glusterfs.peer_status']()
    if newpeers and any(name in v['hostnames'] for v in newpeers.values()):
        ret['result'] = True
        ret['comment'] = 'Host {0} successfully peered'.format(name)
        ret['changes'] = {'new': newpeers, 'old': peers}
    else:
        ret['comment'] = ('Host {0} was successfully peered but did not '
                          'appear in the list of peers'.format(name))
    return ret


def volume_present(name, bricks, stripe=False, replica=False, device_vg=False,
                   transport='tcp', start=False, force=False, arbiter=False):
    '''
    Ensure that the volume exists

    name
        name of the volume

    bricks
        list of brick paths

    replica
        replica count for volume

    arbiter
        use every third brick as arbiter (metadata only)

        .. versionadded:: 2019.2.0

    start
        ensure that the volume is also started

    .. code-block:: yaml

        myvolume:
          glusterfs.volume_present:
            - bricks:
                - host1:/srv/gluster/drive1
                - host2:/srv/gluster/drive2

        Replicated Volume:
          glusterfs.volume_present:
            - name: volume2
            - bricks:
              - host1:/srv/gluster/drive2
              - host2:/srv/gluster/drive3
            - replica: 2
            - start: True

        Replicated Volume with arbiter brick:
          glusterfs.volume_present:
            - name: volume3
            - bricks:
              - host1:/srv/gluster/drive2
              - host2:/srv/gluster/drive3
              - host3:/srv/gluster/drive4
            - replica: 3
            - arbiter: True
            - start: True

    '''
    ret = {'name': name,
           'changes': {},
           'comment': '',
           'result': False}

    if suc.check_name(name, 'a-zA-Z0-9._-'):
        ret['comment'] = 'Invalid characters in volume name.'
        return ret

    volumes = __salt__['glusterfs.list_volumes']()
    if name not in volumes:
        if __opts__['test']:
            comment = 'Volume {0} will be created'.format(name)
            if start:
                comment += ' and started'
            ret['comment'] = comment
            ret['result'] = None
            return ret

        vol_created = __salt__['glusterfs.create_volume'](
            name, bricks, stripe, replica, device_vg,
            transport, start, force, arbiter)

        if not vol_created:
            ret['comment'] = 'Creation of volume {0} failed'.format(name)
            return ret

        old_volumes = volumes
        volumes = __salt__['glusterfs.list_volumes']()
        if name in volumes:
            ret['changes'] = {'new': volumes, 'old': old_volumes}
            ret['comment'] = 'Volume {0} is created'.format(name)
    else:
        ret['comment'] = 'Volume {0} already exists'.format(name)

    if start:
        if __opts__['test']:
            ret['comment'] = ret['comment'] + ' and will be started'
            ret['result'] = None
            return ret
        if int(__salt__['glusterfs.info']()[name]['status']) == 1:
            ret['result'] = True
            ret['comment'] = ret['comment'] + ' and is started'
        else:
            vol_started = __salt__['glusterfs.start_volume'](name)
            if vol_started:
                ret['result'] = True
                ret['comment'] = ret['comment'] + ' and is now started'
                if not ret['changes']:
                    ret['changes'] = {'new': 'started', 'old': 'stopped'}
            else:
                ret['comment'] = (ret['comment'] +
                                  ' but failed to start. Check logs for further information')
                return ret

    if __opts__['test']:
        ret['result'] = None
    else:
        ret['result'] = True
    return ret


def started(name):
    '''
    Check if volume has been started

    name
        name of the volume

    .. code-block:: yaml

        mycluster:
          glusterfs.started: []
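
    Starting is typically ordered after ``volume_present``; a sketch with
    illustrative state IDs:

    .. code-block:: yaml

        start-myvolume:
          glusterfs.started:
            - name: myvolume
            - require:
              - glusterfs: myvolume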
    '''
    ret = {'name': name,
           'changes': {},
           'comment': '',
           'result': False}

    volinfo = __salt__['glusterfs.info']()
    if name not in volinfo:
        ret['result'] = False
        ret['comment'] = 'Volume {0} does not exist'.format(name)
        return ret

    if int(volinfo[name]['status']) == 1:
        ret['comment'] = 'Volume {0} is already started'.format(name)
        ret['result'] = True
        return ret

    if __opts__['test']:
        ret['comment'] = 'Volume {0} will be started'.format(name)
        ret['result'] = None
        return ret

    vol_started = __salt__['glusterfs.start_volume'](name)
    if vol_started:
        ret['result'] = True
        ret['comment'] = 'Volume {0} is started'.format(name)
        ret['changes'] = {'new': 'started', 'old': 'stopped'}
    else:
        ret['result'] = False
        ret['comment'] = 'Failed to start volume {0}'.format(name)

    return ret


def add_volume_bricks(name, bricks):
    '''

    Add brick(s) to an existing volume

    name
        Volume name

    bricks
        List of bricks to add to the volume

    .. code-block:: yaml

        myvolume:
          glusterfs.add_volume_bricks:
            - bricks:
                - host1:/srv/gluster/drive1
                - host2:/srv/gluster/drive2

        Replicated Volume:
          glusterfs.add_volume_bricks:
            - name: volume2
            - bricks:
              - host1:/srv/gluster/drive2
              - host2:/srv/gluster/drive3
    '''
    ret = {'name': name,
           'changes': {},
           'comment': '',
           'result': False}

    volinfo = __salt__['glusterfs.info']()
    if name not in volinfo:
        ret['comment'] = 'Volume {0} does not exist'.format(name)
        return ret

    if int(volinfo[name]['status']) != 1:
        ret['comment'] = 'Volume {0} is not started'.format(name)
        return ret

    current_bricks = [brick['path'] for brick in volinfo[name]['bricks'].values()]
    # Only act when at least one requested brick is not yet in the volume.
    if not set(bricks) - set(current_bricks):
        ret['result'] = True
        ret['comment'] = 'Bricks already added in volume {0}'.format(name)
        return ret

    bricks_added = __salt__['glusterfs.add_volume_bricks'](name, bricks)
    if bricks_added:
        ret['result'] = True
        ret['comment'] = 'Bricks successfully added to volume {0}'.format(name)
        new_bricks = [brick['path'] for brick in
                      __salt__['glusterfs.info']()[name]['bricks'].values()]
        ret['changes'] = {'new': new_bricks, 'old': current_bricks}
        return ret

    ret['comment'] = 'Adding bricks to volume {0} failed'.format(name)
    return ret


def op_version(name, version):
    '''
    .. versionadded:: 2019.2.0

    Set the cluster.op-version

    name
        Volume name

    version
        Version to which the cluster.op-version should be set

    .. code-block:: yaml

        myvolume:
          glusterfs.op_version:
            - name: volume1
            - version: 30707
    '''
    ret = {'name': name,
           'changes': {},
           'comment': '',
           'result': False}

    try:
        current = int(__salt__['glusterfs.get_op_version'](name))
    except TypeError:
        ret['result'] = False
        ret['comment'] = __salt__['glusterfs.get_op_version'](name)[1]
        return ret

    if current == version:
        ret['comment'] = ('Glusterfs cluster.op-version for {0} already '
                          'set to {1}'.format(name, version))
        ret['result'] = True
        return ret

    if __opts__['test']:
        ret['comment'] = ('An attempt would be made to set the '
                          'cluster.op-version for {0} to {1}.'.format(name, version))
        ret['result'] = None
        return ret

    result = __salt__['glusterfs.set_op_version'](version)

    # set_op_version returns (False, error_message) on failure.
    if result[0] is False:
        ret['comment'] = result[1]
        return ret

    ret['comment'] = result
    ret['changes'] = {'old': current, 'new': version}
    ret['result'] = True
    return ret


def max_op_version(name):
    '''
    .. versionadded:: 2019.2.0

    Set the cluster.op-version to the cluster.max-op-version

    name
        Volume name

    .. code-block:: yaml

        myvolume:
          glusterfs.max_op_version:
            - name: volume1
    '''
    ret = {'name': name,
           'changes': {},
           'comment': '',
           'result': False}

    try:
        current = int(__salt__['glusterfs.get_op_version'](name))
    except TypeError:
        ret['result'] = False
        ret['comment'] = __salt__['glusterfs.get_op_version'](name)[1]
        return ret

    try:
        max_version = int(__salt__['glusterfs.get_max_op_version']())
    except TypeError:
        ret['result'] = False
        ret['comment'] = __salt__['glusterfs.get_max_op_version']()[1]
        return ret

    if current == max_version:
        ret['comment'] = ('The cluster.op-version is already set to the '
                          'cluster.max-op-version of {0}'.format(current))
        ret['result'] = True
        return ret

    if __opts__['test']:
        ret['comment'] = ('An attempt would be made to set the '
                          'cluster.op-version to {0}.'.format(max_version))
        ret['result'] = None
        return ret

    result = __salt__['glusterfs.set_op_version'](max_version)

    if result[0] is False:
        ret['comment'] = result[1]
        return ret

    ret['comment'] = result
    ret['changes'] = {'old': current, 'new': max_version}
    ret['result'] = True
    return ret
