Recovered source overview: jinja2.lexer
(from the bytecode dump /opt/alt/python37/lib/python3.7/site-packages/jinja2/__pycache__/lexer.cpython-37.pyc;
only the strings in the compiled module's constant pool -- docstrings, error
messages and identifier names -- are readable, and they are summarized below.)
The module docstring survives verbatim:
    jinja2.lexer
    ~~~~~~~~~~~~

    This module implements a Jinja / Python combination lexer. The
    `Lexer` class provided by this module is used to do some preprocessing
    for Jinja.

    On the one hand it filters out invalid operators like the bitshift
    operators we don't allow in templates. On the other hand it separates
    template code and python code in expressions.

    :copyright: (c) 2010 by the Jinja Team.
    :license: BSD, see LICENSE for more details.
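Before the inventory below, a minimal sketch of what this lexer produces, seen
through Jinja2's public ``Environment.lex`` (which drives this module's
``tokeniter``); the template text is illustrative:

    from jinja2 import Environment

    env = Environment()

    # Environment.lex yields the raw (lineno, token_type, value) tuples from
    # Lexer.tokeniter; whitespace inside the tag still shows up here because
    # the ignored-token filtering only happens later, in Lexer.wrap().
    for lineno, token_type, value in env.lex("Hello {{ name }}!"):
        print(lineno, token_type, repr(value))

On a stock install this prints a ``data`` token for ``'Hello '``, the
``variable_begin``/``name``/``variable_end`` sequence (with ``whitespace``
tokens around the name), and a final ``data`` token for ``'!'``.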
Module-level definitions (names and literals recovered from the constant pool):

- Imports: ``re``; ``operator.itemgetter``; ``collections.deque``;
  ``jinja2.exceptions.TemplateSyntaxError``; ``jinja2.utils.LRUCache``; and from
  ``jinja2._compat``: ``next``, ``iteritems``, ``implements_iterator``,
  ``text_type``, ``intern``.
- ``_lexer_cache``: an ``LRUCache`` holding up to 50 compiled lexers.
- Static regexes: ``whitespace_re`` (``\s+``), ``string_re`` (single- or
  double-quoted strings with escapes), ``integer_re`` (``\d+``), ``float_re``
  (``(?<!\.)\d+\.\d+``) and ``newline_re`` (``(\r\n|\r|\n)``).
- ``name_re`` is chosen at import time: ``compile('föö', '<unknown>', 'eval')``
  probes for Unicode identifier support; on ``SyntaxError`` the ASCII pattern
  ``\b[a-zA-Z_][a-zA-Z0-9_]*\b`` is used, otherwise the pattern is built as
  ``[%s][%s]*`` from ``jinja2._stringdefs.xid_start`` and ``xid_continue``.
- Token type constants: interned strings ``TOKEN_ADD``, ``TOKEN_ASSIGN``,
  ``TOKEN_COLON``, ... for every operator, plus ``float``, ``integer``,
  ``name``, ``string``, ``operator``, ``block_begin``/``block_end``,
  ``variable_begin``/``variable_end``, ``raw_begin``/``raw_end``,
  ``comment_begin``/``comment_end``/``comment``,
  ``linestatement_begin``/``linestatement_end``,
  ``linecomment_begin``/``linecomment_end``/``linecomment``, ``data``,
  ``initial`` and ``eof``.
- ``operators``: a dict mapping the literal operator texts (``+ - / // * % **
  ~ [ ] ( ) { } == != > >= < <= = . : | , ;``) to those constants.
  ``reverse_operators`` inverts it, guarded by ``assert len(operators) ==
  len(reverse_operators), 'operators dropped'``; ``operator_re`` joins the
  escaped operators, longest first (``sorted(..., key=lambda x: -len(x))``).
- ``ignored_tokens``: frozenset of the comment, line-comment and whitespace
  token types that ``Lexer.wrap`` drops; ``ignore_if_empty``: frozenset of
  token types that are not emitted when they match the empty string.

Helper functions (docstrings verbatim where recovered):

- ``_describe_token_type(token_type)`` maps a type to readable text: 'begin of
  comment', 'end of comment', 'begin of statement block', 'end of statement
  block', 'begin of print statement', 'end of print statement', 'begin of line
  statement', 'end of line statement', 'template data / text', 'end of
  template'; operators map through ``reverse_operators`` and anything else
  falls back to the type name itself.
- ``describe_token(token)`` -- "Returns a description of the token."
- ``describe_token_expr(expr)`` -- "Like `describe_token` but for token
  expressions."
- ``count_newlines(value)`` -- "Count the number of newline characters in the
  string.  This is useful for extensions that filter a stream."
- ``compile_rules(environment)`` -- "Compiles all the rules from the
  environment into a list of rules."  The rules cover the comment, block and
  variable delimiters plus, when configured, ``line_statement_prefix`` and
  ``line_comment_prefix``, sorted by pattern length so longer delimiters win.
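These helpers are importable directly; a quick check of their behavior, with
return values that follow from the mapping above:

    from jinja2.lexer import count_newlines, describe_token_expr

    print(describe_token_expr("block_end"))    # 'end of statement block'
    print(describe_token_expr("name:endfor"))  # 'endfor' -- the value wins for names
    print(count_newlines("a\r\nb\nc"))         # 2: newline_re treats \r\n as one newline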
class Failure -- "Class that raises a `TemplateSyntaxError` if called.  Used by
the `Lexer` to specify known errors."  ``__init__(self, message,
cls=TemplateSyntaxError)`` stores the message and error class;
``__call__(self, lineno, filename)`` raises
``self.error_class(self.message, lineno, filename)``.

class Token(tuple) -- "Token class."  An immutable ``(lineno, type, value)``
triple: ``__slots__`` is empty and ``lineno``, ``type`` and ``value`` are
``itemgetter`` properties. ``__new__`` interns the type string. ``__str__``
renders operator tokens via ``reverse_operators``, ``name`` tokens as their
value, and anything else as the type name. ``test(expr)`` -- "Test a token
against a token expression.  This can either be a token type or
``'token_type:token_value'``.  This can only test against string values and
types." ``test_any(*iterable)`` -- "Test against multiple token expressions."
``__repr__`` yields ``Token(%r, %r, %r)``.
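``Token`` behaves exactly like the triple it subclasses; a small sketch of the
``test`` convention:

    from jinja2.lexer import Token

    tok = Token(2, "name", "if")                # (lineno, type, value)
    print(tok.lineno, tok.type, tok.value)      # 2 name if
    print(tok.test("name"))                     # True -- bare type match
    print(tok.test("name:if"))                  # True -- 'type:value' form
    print(tok.test_any("integer", "name:for"))  # False -- neither matches
    print(repr(tok))                            # Token(2, 'name', 'if')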
class TokenStreamIterator -- "The iterator for tokenstreams.  Iterate over the
stream until the eof token is reached."  ``__next__`` returns the stream's
current token and advances the stream; on the ``eof`` token it closes the
stream and raises ``StopIteration``.

class TokenStream -- "A token stream is an iterable that yields
:class:`Token`\s.  The parser however does not iterate over it but calls
:meth:`next` to go one token ahead.  The current active token is stored as
:attr:`current`."  Constructed from a generator plus a template name and
filename, it keeps a ``deque`` of pushed-back tokens and starts with
``current = Token(1, TOKEN_INITIAL, '')``.  Its API, with the recovered
docstrings:

- ``__bool__`` (and the Python 2 alias ``__nonzero__``) is true while tokens
  remain; ``eos`` -- "Are we at the end of the stream?"
- ``push(token)`` -- "Push a token back to the stream."
- ``look()`` -- "Look at the next token." (peeks without consuming)
- ``skip(n=1)`` -- advances n tokens (the original docstring reads "Got n
  tokens ahead.", evidently "Go n tokens ahead.").
- ``next_if(expr)`` -- "Perform the token test and return the token if it
  matched.  Otherwise the return value is `None`."
- ``skip_if(expr)`` -- "Like :meth:`next_if` but only returns `True` or
  `False`."
- ``__next__`` -- "Go one token ahead and return the old one"; pushed-back
  tokens are served first.
- ``close()`` -- "Close the stream."; the current token becomes ``eof``.
- ``expect(expr)`` -- "Expect a given token type and return it.  This accepts
  the same argument as :meth:`jinja2.lexer.Token.test`."  On failure it raises
  ``TemplateSyntaxError`` with 'unexpected end of template, expected %r.' or
  'expected token %r, got %r'.
def get_lexer(environment) -- "Return a lexer which is probably cached."  The
cache key is the tuple of every setting the lexer depends on:
``block_start_string``, ``block_end_string``, ``variable_start_string``,
``variable_end_string``, ``comment_start_string``, ``comment_end_string``,
``line_statement_prefix``, ``line_comment_prefix``, ``trim_blocks``,
``lstrip_blocks``, ``newline_sequence`` and ``keep_trailing_newline``; on a
miss a new ``Lexer`` is built and stored in ``_lexer_cache``.

class Lexer -- "Class that implements a lexer for a given environment.
Automatically created by the environment class, usually you don't have to do
that.

    Note that the lexer is not automatically bound to an environment.
    Multiple environments can share the same lexer."

- ``__init__(environment)`` compiles the tag-level rules (the whitespace,
  float, integer, name, string and operator patterns mapped to their token
  types) and the state machine: a ``root`` state built from
  ``compile_rules(environment)`` plus dedicated ``comment``, ``block``,
  ``variable``, ``raw``, ``linestatement`` and ``linecomment`` states.
  ``trim_blocks`` appends an optional ``\n`` to the block-end pattern,
  ``lstrip_blocks`` adds the lookbehind that strips whitespace before a block
  tag, and ``raw``...``endraw`` gets its own begin/end patterns.  ``Failure``
  entries supply the known errors 'Missing end of comment tag' and 'Missing
  end of raw directive'.
- ``_normalize_newlines(value)`` -- "Called for strings and template data to
  normalize it to unicode." (rewrites all newline flavors to the environment's
  ``newline_sequence``)
- ``tokenize(source, name=None, filename=None, state=None)`` -- "Calls
  tokeniter + tokenize and wraps it in a token stream."  It returns
  ``TokenStream(self.wrap(self.tokeniter(...), name, filename), name,
  filename)``.
- ``wrap(stream, name=None, filename=None)`` -- "This is called with the
  stream as returned by `tokenize` and wraps every token in a :class:`Token`
  and converts the value."  It drops ``ignored_tokens``, renames
  ``linestatement_begin``/``end`` to ``block_begin``/``end``, skips
  ``raw_begin``/``raw_end``, normalizes newlines in ``data``, turns
  ``keyword`` tokens into their value, coerces names with ``str``, decodes
  string literals (``value[1:-1]`` encoded 'ascii' with 'backslashreplace',
  then decoded 'unicode-escape', re-raising decode failures as
  ``TemplateSyntaxError``), converts ``integer``/``float`` values, and maps
  ``operator`` texts through ``operators``.
- ``tokeniter(source, name, filename=None, state=None)`` -- "This method
  tokenizes the text and returns the tokens in a generator.  Use this method
  if you just want to tokenize a template."  It splits the source on
  ``\r\n``/``\r``/``\n`` (honoring ``keep_trailing_newline``), then drives the
  state stack -- ``['root']``, or with a ``'_begin'`` suffix when an initial
  ``variable`` or ``block`` state is requested ("invalid state" otherwise) --
  while a balancing stack tracks ``()``/``[]``/``{}`` and reports
  "unexpected '%s'" / "unexpected '%s', expected '%s'" mismatches.
  ``#bygroup`` rules resolve the token or next state from the named group that
  matched (a ``RuntimeError`` if none did), ``#pop`` leaves a state, and stuck
  input fails with "unexpected char %r at %d".

The trailing bytes of the dump are the module's constant and name pool (the
imports, token constants and function names listed above) and carry no further
source text.
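Because the cache key is the delimiter configuration, environments that share
settings share a single ``Lexer``; a sketch (the custom delimiters are
illustrative):

    from jinja2 import Environment

    plain_a, plain_b = Environment(), Environment()
    custom = Environment(variable_start_string="${", variable_end_string="}$")

    print(plain_a.lexer is plain_b.lexer)  # True -- served from _lexer_cache
    print(plain_a.lexer is custom.lexer)   # False -- a different rule set

    # The custom lexer tokenizes the new delimiters like the defaults would.
    print([tok.value for tok in custom.lexer.tokenize("${ name }$")])
    # ['${', 'name', '}$']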