# -*- coding: utf-8 -*-
"""
    jinja2.lexer
    ~~~~~~~~~~~~

    This module implements a Jinja / Python combination lexer.  The
    `Lexer` class provided by this module is used to do some preprocessing.
    It filters out invalid operators like the bitshift operators we don't
    allow in templates.  It separates template code and python code in
    expressions.

    :copyright: (c) 2017 by the Jinja Team.
    :license: BSD, see LICENSE for more details.
"""
import re
from collections import deque
from operator import itemgetter

from jinja2._compat import implements_iterator, intern, iteritems, text_type
from jinja2.exceptions import TemplateSyntaxError
from jinja2.utils import LRUCache

# cache for the lexers so multiple environments with the same options
# can share a single lexer instance
_lexer_cache = LRUCache(50)

# static regular expressions
whitespace_re = re.compile(r'\s+', re.U)
string_re = re.compile(r"('([^'\\]*(?:\\.[^'\\]*)*)'"
                       r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S)
integer_re = re.compile(r'\d+')

try:
    # check if this Python supports Unicode identifiers
    compile('föö', '<unknown>', 'eval')
except SyntaxError:
    # no Unicode support, use ASCII identifiers
    name_re = re.compile(r'[a-zA-Z_][a-zA-Z0-9_]*')
    check_ident = False
else:
    # Unicode support, build the name pattern from the generated
    # identifier ranges
    from jinja2 import _identifier
    name_re = re.compile(r'[\w{0}]+'.format(_identifier.pattern))
    check_ident = True
    # remove the pattern module from memory after building the regex
    import sys
    del sys.modules['jinja2._identifier']
    import jinja2
    del jinja2._identifier
    del jinja2

float_re = re.compile(r'(?<!\.)\d+\.\d+')
newline_re = re.compile(r'(\r\n|\r|\n)')

# intern the token types and keep references to them
TOKEN_ADD = intern('add')
TOKEN_ASSIGN = intern('assign')
TOKEN_COLON = intern('colon')
TOKEN_COMMA = intern('comma')
TOKEN_DIV = intern('div')
TOKEN_DOT = intern('dot')
TOKEN_EQ = intern('eq')
TOKEN_FLOORDIV = intern('floordiv')
TOKEN_GT = intern('gt')
TOKEN_GTEQ = intern('gteq')
TOKEN_LBRACE = intern('lbrace')
TOKEN_LBRACKET = intern('lbracket')
TOKEN_LPAREN = intern('lparen')
TOKEN_LT = intern('lt')
TOKEN_LTEQ = intern('lteq')
TOKEN_MOD = intern('mod')
TOKEN_MUL = intern('mul')
TOKEN_NE = intern('ne')
TOKEN_PIPE = intern('pipe')
TOKEN_POW = intern('pow')
TOKEN_RBRACE = intern('rbrace')
TOKEN_RBRACKET = intern('rbracket')
TOKEN_RPAREN = intern('rparen')
TOKEN_SEMICOLON = intern('semicolon')
TOKEN_SUB = intern('sub')
TOKEN_TILDE = intern('tilde')
TOKEN_WHITESPACE = intern('whitespace')
TOKEN_FLOAT = intern('float')
TOKEN_INTEGER = intern('integer')
TOKEN_NAME = intern('name')
TOKEN_STRING = intern('string')
TOKEN_OPERATOR = intern('operator')
TOKEN_BLOCK_BEGIN = intern('block_begin')
TOKEN_BLOCK_END = intern('block_end')
TOKEN_VARIABLE_BEGIN = intern('variable_begin')
TOKEN_VARIABLE_END = intern('variable_end')
TOKEN_RAW_BEGIN = intern('raw_begin')
TOKEN_RAW_END = intern('raw_end')
TOKEN_COMMENT_BEGIN = intern('comment_begin')
TOKEN_COMMENT_END = intern('comment_end')
TOKEN_COMMENT = intern('comment')
TOKEN_LINESTATEMENT_BEGIN = intern('linestatement_begin')
TOKEN_LINESTATEMENT_END = intern('linestatement_end')
TOKEN_LINECOMMENT_BEGIN = intern('linecomment_begin')
TOKEN_LINECOMMENT_END = intern('linecomment_end')
TOKEN_LINECOMMENT = intern('linecomment')
TOKEN_DATA = intern('data')
TOKEN_INITIAL = intern('initial')
TOKEN_EOF = intern('eof')

# bind operators to token types
operators = {
    '+':    TOKEN_ADD,
    '-':    TOKEN_SUB,
    '/':    TOKEN_DIV,
    '//':   TOKEN_FLOORDIV,
    '*':    TOKEN_MUL,
    '%':    TOKEN_MOD,
    '**':   TOKEN_POW,
    '~':    TOKEN_TILDE,
    '[':    TOKEN_LBRACKET,
    ']':    TOKEN_RBRACKET,
    '(':    TOKEN_LPAREN,
    ')':    TOKEN_RPAREN,
    '{':    TOKEN_LBRACE,
    '}':    TOKEN_RBRACE,
    '==':   TOKEN_EQ,
    '!=':   TOKEN_NE,
    '>':    TOKEN_GT,
    '>=':   TOKEN_GTEQ,
    '<':    TOKEN_LT,
    '<=':   TOKEN_LTEQ,
    '=':    TOKEN_ASSIGN,
    '.':    TOKEN_DOT,
    ':':    TOKEN_COLON,
    '|':    TOKEN_PIPE,
    ',':    TOKEN_COMMA,
    ';':    TOKEN_SEMICOLON
}

reverse_operators = dict([(v, k) for k, v in iteritems(operators)])
assert len(operators) == len(reverse_operators), 'operators dropped'
# match the longest operators first so that e.g. '**' wins over '*'
operator_re = re.compile('(%s)' % '|'.join(re.escape(x) for x in
                         sorted(operators, key=lambda x: -len(x))))

ignored_tokens = frozenset([TOKEN_COMMENT_BEGIN, TOKEN_COMMENT,
                            TOKEN_COMMENT_END, TOKEN_WHITESPACE,
                            TOKEN_LINECOMMENT_BEGIN, TOKEN_LINECOMMENT_END,
                            TOKEN_LINECOMMENT])
ignore_if_empty = frozenset([TOKEN_WHITESPACE, TOKEN_DATA,
                             TOKEN_COMMENT, TOKEN_LINECOMMENT])


def _describe_token_type(token_type):
    if token_type in reverse_operators:
        return reverse_operators[token_type]
    return {
        TOKEN_COMMENT_BEGIN:        'begin of comment',
        TOKEN_COMMENT_END:          'end of comment',
        TOKEN_COMMENT:              'comment',
        TOKEN_LINECOMMENT:          'comment',
        TOKEN_BLOCK_BEGIN:          'begin of statement block',
        TOKEN_BLOCK_END:            'end of statement block',
        TOKEN_VARIABLE_BEGIN:       'begin of print statement',
        TOKEN_VARIABLE_END:         'end of print statement',
        TOKEN_LINESTATEMENT_BEGIN:  'begin of line statement',
        TOKEN_LINESTATEMENT_END:    'end of line statement',
        TOKEN_DATA:                 'template data / text',
        TOKEN_EOF:                  'end of template'
    }.get(token_type, token_type)


def describe_token(token):
    """Returns a description of the token."""
    if token.type == 'name':
        return token.value
    return _describe_token_type(token.type)


def describe_token_expr(expr):
    """Like `describe_token` but for token expressions."""
    if ':' in expr:
        type, value = expr.split(':', 1)
        if type == 'name':
            return value
    else:
        type = expr
    return _describe_token_type(type)


def count_newlines(value):
    """Count the number of newline characters in the string.  This is
    useful for extensions that filter a stream.
    """
    return len(newline_re.findall(value))


def compile_rules(environment):
    """Compiles all the rules from the environment into a list of rules."""
    e = re.escape
    rules = [
        (len(environment.comment_start_string), 'comment',
         e(environment.comment_start_string)),
        (len(environment.block_start_string), 'block',
         e(environment.block_start_string)),
        (len(environment.variable_start_string), 'variable',
         e(environment.variable_start_string))
    ]

    if environment.line_statement_prefix is not None:
        rules.append((len(environment.line_statement_prefix), 'linestatement',
                      r'^[ \t\v]*' + e(environment.line_statement_prefix)))
    if environment.line_comment_prefix is not None:
        rules.append((len(environment.line_comment_prefix), 'linecomment',
                      r'(?:^|(?<=\S))[^\S\r\n]*' +
                      e(environment.line_comment_prefix)))

    # longest delimiters first so overlapping delimiters resolve correctly
    return [x[1:] for x in sorted(rules, reverse=True)]


class Failure(object):
    """Class that raises a `TemplateSyntaxError` if called.
    Used by the `Lexer` to specify known errors.
    """

    def __init__(self, message, cls=TemplateSyntaxError):
        self.message = message
        self.error_class = cls

    def __call__(self, lineno, filename):
        raise self.error_class(self.message, lineno, filename)


class Token(tuple):
    """Token class."""
    __slots__ = ()
    lineno, type, value = (property(itemgetter(x)) for x in range(3))

    def __new__(cls, lineno, type, value):
        return tuple.__new__(cls, (lineno, intern(str(type)), value))

    def __str__(self):
        if self.type in reverse_operators:
            return reverse_operators[self.type]
        elif self.type == 'name':
            return self.value
        return self.type

    def test(self, expr):
        """Test a token against a token expression.  This can either be a
        token type or ``'token_type:token_value'``.  This can only test
        against string values and types.
        """
        # here we do a regular string equality check as test_any is usually
        # passed an iterable of not interned strings
        if self.type == expr:
            return True
        elif ':' in expr:
            return expr.split(':', 1) == [self.type, self.value]
        return False

    def test_any(self, *iterable):
        """Test against multiple token expressions."""
        for expr in iterable:
            if self.test(expr):
                return True
        return False

    def __repr__(self):
        return 'Token(%r, %r, %r)' % (
            self.lineno,
            self.type,
            self.value
        )
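# Usage sketch for token expressions (illustrative only; nothing here is
# executed by the module itself).  Tokens are (lineno, type, value) triples
# that compare against a bare type or a ``type:value`` expression:
#
#     >>> tok = Token(1, TOKEN_NAME, 'foo')
#     >>> tok.test('name'), tok.test('name:foo'), tok.test('name:bar')
#     (True, True, False)
#     >>> tok.test_any('integer', 'name:foo')
#     True
#     >>> describe_token(tok)
#     'foo'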
@implements_iterator
class TokenStreamIterator(object):
    """The iterator for tokenstreams.  Iterate over the stream
    until the eof token is reached.
    """

    def __init__(self, stream):
        self.stream = stream

    def __iter__(self):
        return self

    def __next__(self):
        token = self.stream.current
        if token.type is TOKEN_EOF:
            self.stream.close()
            raise StopIteration()
        next(self.stream)
        return token


@implements_iterator
class TokenStream(object):
    """A token stream is an iterable that yields :class:`Token`\\s.  The
    parser however does not iterate over it but calls :meth:`next` to go
    one token ahead.  The current active token is stored as :attr:`current`.
    """

    def __init__(self, generator, name, filename):
        self._iter = iter(generator)
        self._pushed = deque()
        self.name = name
        self.filename = filename
        self.closed = False
        self.current = Token(1, TOKEN_INITIAL, '')
        next(self)

    def __iter__(self):
        return TokenStreamIterator(self)

    def __bool__(self):
        return bool(self._pushed) or self.current.type is not TOKEN_EOF
    __nonzero__ = __bool__  # py2

    eos = property(lambda x: not x, doc="Are we at the end of the stream?")

    def push(self, token):
        """Push a token back to the stream."""
        self._pushed.append(token)

    def look(self):
        """Look at the next token."""
        old_token = next(self)
        result = self.current
        self.push(result)
        self.current = old_token
        return result

    def skip(self, n=1):
        """Go n tokens ahead."""
        for x in range(n):
            next(self)

    def next_if(self, expr):
        """Perform the token test and return the token if it matched.
        Otherwise the return value is `None`.
        """
        if self.current.test(expr):
            return next(self)

    def skip_if(self, expr):
        """Like :meth:`next_if` but only returns `True` or `False`."""
        return self.next_if(expr) is not None

    def __next__(self):
        """Go one token ahead and return the old one.

        Use the built-in :func:`next` instead of calling this directly.
        """
        rv = self.current
        if self._pushed:
            self.current = self._pushed.popleft()
        elif self.current.type is not TOKEN_EOF:
            try:
                self.current = next(self._iter)
            except StopIteration:
                self.close()
        return rv

    def close(self):
        """Close the stream."""
        self.current = Token(self.current.lineno, TOKEN_EOF, '')
        self._iter = None
        self.closed = True

    def expect(self, expr):
        """Expect a given token type and return it.  This accepts the same
        argument as :meth:`jinja2.lexer.Token.test`.
        """
        if not self.current.test(expr):
            expr = describe_token_expr(expr)
            if self.current.type is TOKEN_EOF:
                raise TemplateSyntaxError('unexpected end of template, '
                                          'expected %r.' % expr,
                                          self.current.lineno,
                                          self.name, self.filename)
            raise TemplateSyntaxError("expected token %r, got %r" %
                                      (expr, describe_token(self.current)),
                                      self.current.lineno,
                                      self.name, self.filename)
        try:
            return self.current
        finally:
            next(self)
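# Example of driving a TokenStream by hand (illustrative only; streams are
# normally produced by Lexer.tokenize).  ``look`` peeks one token ahead
# without consuming it, while the built-in ``next`` advances the stream:
#
#     >>> ts = TokenStream(iter([Token(1, TOKEN_NAME, 'foo'),
#     ...                        Token(1, TOKEN_ASSIGN, '=')]),
#     ...                  '<example>', None)
#     >>> ts.current
#     Token(1, 'name', 'foo')
#     >>> ts.look()
#     Token(1, 'assign', '=')
#     >>> next(ts)
#     Token(1, 'name', 'foo')
#     >>> ts.current
#     Token(1, 'assign', '=')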
def get_lexer(environment):
    """Return a lexer which is probably cached."""
    key = (environment.block_start_string,
           environment.block_end_string,
           environment.variable_start_string,
           environment.variable_end_string,
           environment.comment_start_string,
           environment.comment_end_string,
           environment.line_statement_prefix,
           environment.line_comment_prefix,
           environment.trim_blocks,
           environment.lstrip_blocks,
           environment.newline_sequence,
           environment.keep_trailing_newline)
    lexer = _lexer_cache.get(key)
    if lexer is None:
        lexer = Lexer(environment)
        _lexer_cache[key] = lexer
    return lexer


class Lexer(object):
    """Class that implements a lexer for a given environment.  Automatically
    created by the environment class, usually you don't have to do that.

    Note that the lexer is not automatically bound to an environment.
    Multiple environments can share the same lexer.
    """

    def __init__(self, environment):
        # shortcuts
        c = lambda x: re.compile(x, re.M | re.S)
        e = re.escape

        # lexing rules for tags
        tag_rules = [
            (whitespace_re, TOKEN_WHITESPACE, None),
            (float_re, TOKEN_FLOAT, None),
            (integer_re, TOKEN_INTEGER, None),
            (name_re, TOKEN_NAME, None),
            (string_re, TOKEN_STRING, None),
            (operator_re, TOKEN_OPERATOR, None)
        ]

        # assemble the root lexing rule.  because "|" is ungreedy
        # we have to sort by length so that the lexer continues working
        # as expected when we have parentheses or braces in variable tags
        root_tag_rules = compile_rules(environment)

        # block suffix if trimming is enabled
        block_suffix_re = environment.trim_blocks and '\\n?' or ''

        # strip leading whitespace if lstrip_blocks is enabled
        prefix_re = {}
        if environment.lstrip_blocks:
            # use '{%+' to manually disable lstrip_blocks behavior
            no_lstrip_re = e('+')
            # detect overlap between block and variable or comment strings
            block_diff = c(r'^%s(.*)' % e(environment.block_start_string))
            # make sure we don't mistake a block for a variable or a comment
            m = block_diff.match(environment.comment_start_string)
            no_lstrip_re += m and r'|%s' % e(m.group(1)) or ''
            m = block_diff.match(environment.variable_start_string)
            no_lstrip_re += m and r'|%s' % e(m.group(1)) or ''

            # detect overlap between comment and variable strings
            comment_diff = c(r'^%s(.*)' % e(environment.comment_start_string))
            m = comment_diff.match(environment.variable_start_string)
            no_variable_re = m and r'(?!%s)' % e(m.group(1)) or ''

            lstrip_re = r'^[ \t]*'
            block_prefix_re = r'%s%s(?!%s)|%s\+?' % (
                    lstrip_re,
                    e(environment.block_start_string),
                    no_lstrip_re,
                    e(environment.block_start_string),
                    )
            comment_prefix_re = r'%s%s%s|%s\+?' % (
                    lstrip_re,
                    e(environment.comment_start_string),
                    no_variable_re,
                    e(environment.comment_start_string),
                    )
            prefix_re['block'] = block_prefix_re
            prefix_re['comment'] = comment_prefix_re
        else:
            block_prefix_re = '%s' % e(environment.block_start_string)

        self.newline_sequence = environment.newline_sequence
        self.keep_trailing_newline = environment.keep_trailing_newline

        # global lexing rules
        self.rules = {
            'root': [
                # directives
                (c('(.*?)(?:%s)' % '|'.join(
                    [r'(?P<raw_begin>(?:\s*%s\-|%s)\s*raw\s*(?:\-%s\s*|%s))' % (
                        e(environment.block_start_string),
                        block_prefix_re,
                        e(environment.block_end_string),
                        e(environment.block_end_string)
                    )] + [
                        r'(?P<%s_begin>\s*%s\-|%s)' % (n, r, prefix_re.get(n, r))
                        for n, r in root_tag_rules
                    ])), (TOKEN_DATA, '#bygroup'), '#bygroup'),
                # data
                (c('.+'), TOKEN_DATA, None)
            ],
            # comments
            TOKEN_COMMENT_BEGIN: [
                (c(r'(.*?)((?:\-%s\s*|%s)%s)' % (
                    e(environment.comment_end_string),
                    e(environment.comment_end_string),
                    block_suffix_re
                )), (TOKEN_COMMENT, TOKEN_COMMENT_END), '#pop'),
                (c('(.)'), (Failure('Missing end of comment tag'),), None)
            ],
            # blocks
            TOKEN_BLOCK_BEGIN: [
                (c(r'(?:\-%s\s*|%s)%s' % (
                    e(environment.block_end_string),
                    e(environment.block_end_string),
                    block_suffix_re
                )), TOKEN_BLOCK_END, '#pop'),
            ] + tag_rules,
            # variables
            TOKEN_VARIABLE_BEGIN: [
                (c(r'\-%s\s*|%s' % (
                    e(environment.variable_end_string),
                    e(environment.variable_end_string)
                )), TOKEN_VARIABLE_END, '#pop')
            ] + tag_rules,
            # raw block
            TOKEN_RAW_BEGIN: [
                (c(r'(.*?)((?:\s*%s\-|%s)\s*endraw\s*(?:\-%s\s*|%s%s))' % (
                    e(environment.block_start_string),
                    block_prefix_re,
                    e(environment.block_end_string),
                    e(environment.block_end_string),
                    block_suffix_re
                )), (TOKEN_DATA, TOKEN_RAW_END), '#pop'),
                (c('(.)'), (Failure('Missing end of raw directive'),), None)
            ],
            # line statements
            TOKEN_LINESTATEMENT_BEGIN: [
                (c(r'\s*(\n|$)'), TOKEN_LINESTATEMENT_END, '#pop')
            ] + tag_rules,
            # line comments
            TOKEN_LINECOMMENT_BEGIN: [
                (c(r'(.*?)()(?=\n|$)'), (TOKEN_LINECOMMENT,
                 TOKEN_LINECOMMENT_END), '#pop')
            ]
        }

    def _normalize_newlines(self, value):
        """Called for strings and template data to normalize it to unicode."""
        return newline_re.sub(self.newline_sequence, value)

    def tokenize(self, source, name=None, filename=None, state=None):
        """Calls tokeniter + wrap and wraps it in a token stream."""
        stream = self.tokeniter(source, name, filename, state)
        return TokenStream(self.wrap(stream, name, filename), name, filename)

    def wrap(self, stream, name=None, filename=None):
        """This is called with the stream as returned by `tokenize` and wraps
        every token in a :class:`Token` and converts the value.
        """
        for lineno, token, value in stream:
            if token in ignored_tokens:
                continue
            elif token == 'linestatement_begin':
                token = 'block_begin'
            elif token == 'linestatement_end':
                token = 'block_end'
            # we are not interested in those tokens in the parser
            elif token in ('raw_begin', 'raw_end'):
                continue
            elif token == 'data':
                value = self._normalize_newlines(value)
            elif token == 'keyword':
                token = value
            elif token == 'name':
                value = str(value)
                if check_ident and not value.isidentifier():
                    raise TemplateSyntaxError(
                        'Invalid character in identifier',
                        lineno, name, filename)
            elif token == 'string':
                # try to unescape string
                try:
                    value = self._normalize_newlines(value[1:-1]) \
                        .encode('ascii', 'backslashreplace') \
                        .decode('unicode-escape')
                except Exception as e:
                    msg = str(e).split(':')[-1].strip()
                    raise TemplateSyntaxError(msg, lineno, name, filename)
            elif token == 'integer':
                value = int(value)
            elif token == 'float':
                value = float(value)
            elif token == 'operator':
                token = operators[value]
            yield Token(lineno, token, value)

    def tokeniter(self, source, name, filename=None, state=None):
        """This method tokenizes the text and returns the tokens in a
        generator.  Use this method if you just want to tokenize a template.
        """
        source = text_type(source)
        lines = source.splitlines()
        if self.keep_trailing_newline and source:
            for newline in ('\r\n', '\r', '\n'):
                if source.endswith(newline):
                    lines.append('')
                    break
        source = '\n'.join(lines)
        pos = 0
        lineno = 1
        stack = ['root']

        if state is not None and state != 'root':
            assert state in ('variable', 'block'), 'invalid state'
            stack.append(state + '_begin')

        statetokens = self.rules[stack[-1]]
        source_length = len(source)
        balancing_stack = []

        while 1:
            # tokenizer loop
            for regex, tokens, new_state in statetokens:
                m = regex.match(source, pos)
                # if no match we try again with the next rule
                if m is None:
                    continue

                # we only match blocks and variables if braces / parentheses
                # are balanced.  continue parsing with the lower rule which
                # is the operator rule.  do this only if the end tags look
                # like operators
                if balancing_stack and \
                   tokens in (TOKEN_VARIABLE_END, TOKEN_BLOCK_END,
                              TOKEN_LINESTATEMENT_END):
                    continue

                # tuples support more options
                if isinstance(tokens, tuple):
                    for idx, token in enumerate(tokens):
                        # failure group
                        if token.__class__ is Failure:
                            raise token(lineno, filename)
                        # bygroup is a bit more complex, in that case we
                        # yield for the current token the first named
                        # group that matched
                        elif token == '#bygroup':
                            for key, value in iteritems(m.groupdict()):
                                if value is not None:
                                    yield lineno, key, value
                                    lineno += value.count('\n')
                                    break
                            else:
                                raise RuntimeError('%r wanted to resolve '
                                                   'the token dynamically'
                                                   ' but no group matched'
                                                   % regex)
                        # normal group
                        else:
                            data = m.group(idx + 1)
                            if data or token not in ignore_if_empty:
                                yield lineno, token, data
                            lineno += data.count('\n')

                # strings as token just are yielded as it
                else:
                    data = m.group()
                    # update brace/parenthesis balance
                    if tokens == 'operator':
                        if data == '{':
                            balancing_stack.append('}')
                        elif data == '(':
                            balancing_stack.append(')')
                        elif data == '[':
                            balancing_stack.append(']')
                        elif data in ('}', ')', ']'):
                            if not balancing_stack:
                                raise TemplateSyntaxError("unexpected '%s'" %
                                                          data, lineno, name,
                                                          filename)
                            expected_op = balancing_stack.pop()
                            if expected_op != data:
                                raise TemplateSyntaxError(
                                    "unexpected '%s', expected '%s'" %
                                    (data, expected_op),
                                    lineno, name, filename)
                    # yield items
                    if data or tokens not in ignore_if_empty:
                        yield lineno, tokens, data
                    lineno += data.count('\n')

                # fetch new position into new variable so that we can check
                # if there is an internal parsing error which would result
                # in an infinite loop
                pos2 = m.end()

                # handle state changes
                if new_state is not None:
                    # remove the uppermost state
                    if new_state == '#pop':
                        stack.pop()
                    # resolve the new state by group checking
                    elif new_state == '#bygroup':
                        for key, value in iteritems(m.groupdict()):
                            if value is not None:
                                stack.append(key)
                                break
                        else:
                            raise RuntimeError('%r wanted to resolve the '
                                               'new state dynamically but'
                                               ' no group matched' %
                                               regex)
                    # direct state name given
                    else:
                        stack.append(new_state)
                    statetokens = self.rules[stack[-1]]
                # we are still at the same position and no stack change.
                # this means a loop without break condition, avoid that and
                # raise error
                elif pos2 == pos:
                    raise RuntimeError('%r yielded empty string without '
                                       'stack change' % regex)
                # publish new function and start again
                pos = pos2
                break
            # if loop terminated without break we haven't found a single match
            # either we are at the end of the file or we have a problem
            else:
                # end of text
                if pos >= source_length:
                    return
                # something went wrong
                raise TemplateSyntaxError('unexpected char %r at %d' %
                                          (source[pos], pos), lineno,
                                          name, filename)
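# Example of end-to-end lexing through an Environment (illustrative; assumes
# the default delimiters).  ``Environment.lex`` drives ``tokeniter`` and
# yields the raw (lineno, token_type, value) tuples before ``wrap`` filters
# whitespace and converts values:
#
#     >>> from jinja2 import Environment
#     >>> list(Environment().lex('Hello {{ name }}!'))
#     [(1, 'data', 'Hello '), (1, 'variable_begin', '{{'),
#      (1, 'whitespace', ' '), (1, 'name', 'name'), (1, 'whitespace', ' '),
#      (1, 'variable_end', '}}'), (1, 'data', '!')]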