Amazon S3 Fileserver Backend

.. versionadded:: 0.16.0

This backend exposes directories in S3 buckets as Salt environments. To enable
this backend, add ``s3fs`` to the :conf_master:`fileserver_backend` option in the
Master config file.

.. code-block:: yaml

    fileserver_backend:
      - s3fs

S3 credentials must also be set in the master config file:

.. code-block:: yaml

    s3.keyid: GKTADJGHEIQSXMKKRBJ08H
    s3.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs

Alternatively, if running on EC2, these credentials can be loaded automatically
from instance metadata.
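
On EC2, the instance-metadata service (IMDS) serves a JSON credential document
for the instance's IAM role. The sketch below is purely illustrative — Salt's
own IAM helpers perform the actual HTTP lookup — and only shows how such a
document maps onto the ``s3.keyid``/``s3.key`` settings; the sample values are
hypothetical.

```python
import json

# Illustrative shape of the credential document returned by IMDS at
# /latest/meta-data/iam/security-credentials/<role-name>.
SAMPLE_IMDS_DOC = '''{
  "Code": "Success",
  "AccessKeyId": "GKTADJGHEIQSXMKKRBJ08H",
  "SecretAccessKey": "askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs",
  "Token": "...session token..."
}'''

def creds_from_imds_doc(doc):
    """Map an IMDS credential document to (keyid, key, session token)."""
    data = json.loads(doc)
    return data["AccessKeyId"], data["SecretAccessKey"], data.get("Token")

keyid, key, token = creds_from_imds_doc(SAMPLE_IMDS_DOC)
```

When a session token is present it must accompany every request signed with
the temporary keys, which is why the backend refreshes credentials rather than
caching them indefinitely.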

This fileserver supports two modes of operation for the buckets:

1. :strong:`A single bucket per environment`

   .. code-block:: yaml

    s3.buckets:
      production:
        - bucket1
        - bucket2
      staging:
        - bucket3
        - bucket4

2. :strong:`Multiple environments per bucket`

   .. code-block:: yaml

    s3.buckets:
      - bucket1
      - bucket2
      - bucket3
      - bucket4

Note that bucket names must be all lowercase both in the AWS console and in
Salt, otherwise you may encounter ``SignatureDoesNotMatch`` errors.

A multiple-environment bucket must adhere to the following root directory
structure::

    s3://<bucket name>/<environment>/<files>
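
For example, a single bucket serving two environments might be laid out as
follows (an illustrative layout; the environment and file names are our own):

.. code-block:: text

    bucket1/
    ├── base/
    │   ├── top.sls
    │   └── webserver/init.sls
    └── dev/
        └── top.sls

Each top-level directory becomes a Salt environment of the same name.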

.. note:: This fileserver backend requires the use of the MD5 hashing algorithm.
    MD5 may not be compliant with all security policies.

.. note:: This fileserver backend is only compatible with MD5 ETag hashes in
    the S3 metadata. This means that you must use SSE-S3 or plaintext for
    bucket encryption, and that you must not use multipart upload when
    uploading to your bucket. More information here:
    https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html

    Objects without an MD5 ETag will be fetched on every fileserver update.

    If you deal with objects greater than 8MB, then you should use the
    following AWS CLI config to avoid multipart upload:

    .. code-block:: text

        s3 =
          multipart_threshold = 1024MB

    More info here:
    https://docs.aws.amazon.com/cli/latest/topic/s3-config.html
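
The ETag constraint above can be checked locally: for SSE-S3 or unencrypted
single-part uploads, the ETag is simply the hex MD5 digest of the object,
while multipart uploads produce ``<md5>-<part count>`` ETags that cannot
validate a cache. A minimal sketch (the helper names are our own, not the
backend's):

```python
import hashlib
import re

def is_md5_etag(etag):
    """True if an S3 ETag is a plain MD5 digest (32 lowercase hex chars).

    Multipart uploads yield ETags like '<md5>-<part count>', which this
    backend cannot use for cache validation.
    """
    return re.fullmatch(r'[0-9a-f]{32}', etag.strip('"')) is not None

def cache_is_current(data, etag):
    """Compare a cached object's bytes against its S3 ETag."""
    if not is_md5_etag(etag):
        return False  # no usable hash: the object is re-fetched every update
    return hashlib.md5(data).hexdigest() == etag.strip('"')
```

Objects whose ETag fails ``is_md5_etag`` are exactly the ones that get fetched
on every fileserver update.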
----

The remainder of this file is the compiled module body; only the function
signatures and docstrings are recoverable, and they are reproduced below. The
module also defines the constants ``S3_CACHE_EXPIRE`` (lifetime of the buckets
cache file, in seconds) and ``S3_SYNC_ON_UPDATE`` (whether to sync the local
file cache on every fileserver update).

envs()
    Return a list of directories within the bucket that can be used as
    environments.

update()
    Update the cache file for the bucket.

find_file(path, saltenv='base', \**kwargs)
    Look through the buckets cache file for a match. If the file is found, it
    is retrieved from S3 only if its cached version is missing, or if the MD5
    does not match.

file_hash(load, fnd)
    Return an MD5 file hash.

serve_file(load, fnd)
    Return a chunk from a file based on the data received.

file_list(load)
    Return a list of all files on the file server in a specified environment.

file_list_emptydirs(load)
    Return a list of all empty directories on the master.

dir_list(load)
    Return a list of all directories on the master.

_get_s3_key()
    Get AWS keys from pillar or config.

_init()
    Connect to S3 and download the metadata for each file in all buckets
    specified, and cache the data to disk.

_get_cache_dir()
    Return the path to the s3cache dir.

_get_cached_file_name(bucket_name, saltenv, path)
    Return the cached file name for a bucket path file.

_get_buckets_cache_filename()
    Return the filename of the cache for bucket contents. Create the path if
    it does not exist.

_refresh_buckets_cache_file(cache_file)
    Retrieve the content of all buckets and cache the metadata to the buckets
    cache file.

_read_buckets_cache_file(cache_file)
    Return the contents of the buckets cache file.

_find_files(metadata)
    Look for all the files in the S3 bucket cache metadata.

_find_dirs(metadata)
    Look for all the directories in the S3 bucket cache metadata. Supports
    trailing ``/`` keys (as created by the S3 console) as well as directories
    discovered in the path of file keys.

_find_file_meta(metadata, bucket_name, saltenv, path)
    Look for a file's metadata in the S3 bucket cache file.

_get_buckets()
    Return the configured buckets.

_get_file_from_s3(metadata, saltenv, bucket_name, path, cached_file_path)
    Check the local cache for the file; if it is old or missing, fetch the
    file from S3 and update the cache.

_trim_env_off_path(paths, saltenv, trim_slash=False)
    Return a list of file paths with the saltenv directory removed.

_is_env_per_bucket()
    Return the configuration mode: either buckets per environment, or a list
    of buckets that have environment dirs in their root.
