Copy pillar data from a bucket in Amazon S3

The S3 pillar can be configured in the master config file with the following
options:

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: my.fancy.pillar.bucket
          keyid: KASKFJWAKJASJKDAJKSD
          key: ksladfDLKDALSFKSD93q032sdDasdfasdflsadkf
          multiple_env: False
          environment: base
          prefix: somewhere/overthere
          verify_ssl: True
          service_url: s3.amazonaws.com
          kms_keyid: 01234567-89ab-cdef-0123-4567890abcde
          s3_cache_expire: 30
          s3_sync_on_update: True
          path_style: False
          https_enable: True

The ``bucket`` parameter specifies the target S3 bucket. It is required.

The ``keyid`` parameter specifies the access key ID to use when accessing the
S3 bucket. If it is not provided, an attempt will be made to fetch it from the
EC2 instance metadata.

The ``key`` parameter specifies the secret access key to use when accessing the
S3 bucket. If it is not provided, an attempt will be made to fetch it from the
EC2 instance metadata.
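
Because both credentials can be picked up from instance metadata, a minimal
configuration is possible when the master runs on an EC2 instance whose IAM
instance profile grants read access to the bucket. A sketch under that
assumption (the bucket name is illustrative):

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: my.fancy.pillar.bucket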

The ``multiple_env`` parameter defaults to False. It specifies whether the
pillar should interpret top-level folders as pillar environments (see the mode
section below).

The ``environment`` parameter defaults to 'base'. It specifies which environment
the bucket represents when in single environment mode (see the mode section
below). It is ignored if ``multiple_env`` is True.

The ``prefix`` parameter defaults to ''. It specifies a key prefix to use when
searching the bucket for pillar data, and it applies whether ``multiple_env``
is True or False. Essentially it tells ext_pillar to look for your pillar data
in a 'subdirectory' of your S3 bucket.

The ``verify_ssl`` parameter defaults to True. It specifies whether to check for
valid S3 SSL certificates. *NOTE* If you use bucket names containing periods,
this must be set to False or an invalid certificate error will be thrown (issue
#12200).

The ``service_url`` parameter defaults to 's3.amazonaws.com'. It specifies the
base URL to use for accessing S3.

The ``kms_keyid`` parameter is optional. It specifies the ID of the Key
Management Service (KMS) master key that was used to encrypt the object.

The ``s3_cache_expire`` parameter defaults to 30 seconds. It specifies the
expiration time of the S3 metadata cache file.

The ``s3_sync_on_update`` parameter defaults to True. It specifies whether the
cache is synced on update rather than just-in-time.

The ``path_style`` parameter defaults to False. It specifies whether to use
path-style requests or DNS-style requests.

The ``https_enable`` parameter defaults to True. It specifies whether to use
the HTTPS protocol or the HTTP protocol.
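
To illustrate how ``service_url``, ``path_style``, and ``https_enable`` combine,
here is a sketch for a hypothetical S3-compatible service reachable over plain
HTTP at a custom endpoint (the host name and bucket name are assumptions, not
defaults of this module):

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: pillar-bucket
          keyid: <access key id>
          key: <secret access key>
          service_url: s3.example.internal
          path_style: True
          https_enable: False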

This pillar can operate in two modes: single environment per bucket or multiple
environments per bucket.

Single environment mode must have this bucket structure:

.. code-block:: text

    s3://<bucket name>/<prefix>/<files>

Multiple environment mode must have this bucket structure:

.. code-block:: text

    s3://<bucket name>/<prefix>/<environment>/<files>
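
For example, with ``multiple_env: True`` and ``prefix: somewhere/overthere``, a
bucket populated with the following (illustrative) object keys would expose
``base`` and ``dev`` pillar environments:

.. code-block:: text

    s3://my.fancy.pillar.bucket/somewhere/overthere/base/top.sls
    s3://my.fancy.pillar.bucket/somewhere/overthere/base/common.sls
    s3://my.fancy.pillar.bucket/somewhere/overthere/dev/top.sls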

If you wish to define your pillar data entirely within S3, it is recommended
that you use the ``prefix`` parameter and specify one ext_pillar entry per
environment rather than setting ``multiple_env``, as sketched below. This is
due to issue #22471 (https://github.com/saltstack/salt/issues/22471).
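
A sketch of that recommendation, assuming an illustrative bucket name and
per-environment prefixes:

.. code-block:: yaml

    ext_pillar:
      - s3:
          bucket: my.fancy.pillar.bucket
          environment: base
          prefix: pillar/base
      - s3:
          bucket: my.fancy.pillar.bucket
          environment: dev
          prefix: pillar/dev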
The remainder of the original ``s3.pyo`` file is compiled bytecode from which
only the module's structure is recoverable: an ``S3Credentials`` class holding
the connection settings (key, keyid, kms_keyid, bucket, service_url,
verify_ssl, location, path_style, https_enable), and the functions
``ext_pillar``, ``_init`` (connect to S3, download the metadata for each file
in the configured buckets, and cache it to disk), ``_get_cache_dir``,
``_get_cached_file_name``, ``_get_buckets_cache_filename``,
``_refresh_buckets_cache_file``, ``_read_buckets_cache_file``, ``_find_files``,
``_find_file_meta``, and ``_get_file_from_s3`` (fetch a file from S3 when the
locally cached copy is missing or stale).