lexertl is a modern, modular lexical analyser generator. See the Wikipedia definition of a lexical analyser at http://en.wikipedia.org/wiki/Lexical_analysis.
Traditionally, programs such as lex generate source code as their output and only support one programming language. With lexertl I am seeking to offer much more flexibility than that by exposing the state machine that is generated from a supplied lex specification. This gives the user much more freedom in how they process the data.
lexertl compiles equally well with GCC and with Visual C++ 7.1 and above.
I had put off modernising the lexertl codebase due to the poor support for C++11 in Visual C++, but the 2015 compiler has come along and rendered such excuses moot. I have therefore made the updated codebase available in the form of lexertl14.
The original plan was to go to C++11, but having browsed Effective Modern C++ (as well as numerous articles on the internet) about the virtues of using std::make_unique(), I have decided to go directly to C++14. This decision was made easier by the fact that Visual C++ was one of the first (if not the first) compilers to support the feature "out of the box".
After years of trying to keep compatibility with VC++ 6, and then, when that became impossible, VC++ 2003, I think it's time to embrace the modernisation of C++ and move forward. The original C++03 codebase is still available for those stuck with older compilers.
lexertl14
is the version I now use at work.
A lexical analyser is a program that takes text as input and outputs substrings it recognises. Each substring is referred to as a 'token' and has a numeric identifier assigned to it to uniquely identify its type (string, int, keyword etc.). A rule is defined for each token that must be recognised, using regular expressions.
A good way to learn how regular expressions really work is to use the lexertl::debug::dump() function (see a later section on this page for how). In general, people are so used to Perl style expressions that they can be confused when things like look-ahead assertions are not supported by a lexer generator. I will therefore show a couple of subtler-than-average regular expressions.
'(''|[^'])*' will match SQL Server style strings.
\/\*([^*]|\*+[^/*])*\*+\/ matches a C style comment. (You can view the state machine each of these builds with lexertl::debug::dump().)
Note that the same state machine can be achieved with the following shorter regex:
\/\*(?s:.)*?\*\/
However, if you take that approach, be careful to declare the pattern near the end of your lexer rules: the abstemious (non-greedy) repeat operator is very powerful and, if used early in your rules, will take precedence over later greedy repeats. A sketch follows below.
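Here is a minimal sketch that compiles the string and comment patterns above and prints the resulting state machine with lexertl::debug::dump(). The token ids (1 and 2) are arbitrary, the comment regex is written with the '/' characters left unescaped (which is equivalent), and the abstemious pattern is pushed last, per the advice above.

#include <lexertl/debug.hpp>
#include <lexertl/generator.hpp>
#include <iostream>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    // SQL Server style string
    rules.push("'(''|[^'])*'", 1);
    // Abstemious C comment regex, declared last so it cannot take
    // precedence over the greedy repeats above.
    rules.push("/\\*(?s:.)*?\\*/", 2);
    lexertl::generator::build(rules, sm);
    // Print the state machine so the transitions can be inspected.
    lexertl::debug::dump(sm, std::cout);
    return 0;
}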
Sequence | Meaning
---|---
\a | Alert (bell).
\b | Backspace.
\e | ESC character, x1B.
\n | Newline.
\r | Carriage return.
\f | Form feed, x0C.
\t | Horizontal tab, x09.
\v | Vertical tab, x0B.
\octal | Character specified by a three-digit octal code.
\xhex | Character specified by a hexadecimal code.
\cchar | Named control character.
"..." | All characters taken as literals between double quotes, except escape sequences.
Sequence | Meaning
---|---
[...] | A single character listed or contained within a listed range. Ranges can be combined with the {+} and {-} operators. For example [a-z]{+}[0-9] is the same as [0-9a-z] and [a-z]{-}[aeiou] is the same as [b-df-hj-np-tv-z].
[^...] | A single character not listed and not contained within a listed range.
. | Any character. Defaults to [^\n], but '.' can be made to match any character at all (including newline) via the s option described below or the equivalent rules-level bit flag.
\d | Digit character ([0-9]).
\D | Non-digit character ([^0-9]).
\s | Whitespace character ([ \t\n\r\f\v]).
\S | Non-whitespace character ([^ \t\n\r\f\v]).
\w | Word character ([a-zA-Z0-9_]).
\W | Non-word character ([^a-zA-Z0-9_]).
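To illustrate the {+} and {-} charset operators from the table above, here is a small hedged sketch (the token ids are arbitrary): one rule matches runs of lower case consonants, the other runs of lower case letters and digits.

#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>
#include <string>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    // [a-z]{-}[aeiou] is the lower case consonants,
    // [a-z]{+}[0-9] is equivalent to [0-9a-z].
    rules.push("([a-z]{-}[aeiou])+", 1);
    rules.push("([a-z]{+}[0-9])+", 2);
    rules.push("\\s+", 3);
    lexertl::generator::build(rules, sm);

    std::string input("bcd aeiou x9z");
    lexertl::smatch results(input.begin(), input.end());

    // Read ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}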
Sequence | Meaning
---|---
\p{C} | Other.
\p{Cc} | Other, Control.
\p{Cf} | Other, Format.
\p{Co} | Other, Private Use.
\p{Cs} | Other, Surrogate.
\p{L} | Letter.
\p{LC} | Letter, Cased.
\p{Ll} | Letter, Lowercase.
\p{Lm} | Letter, Modifier.
\p{Lo} | Letter, Other.
\p{Lt} | Letter, Titlecase.
\p{Lu} | Letter, Uppercase.
\p{M} | Mark.
\p{Mc} | Mark, Space Combining.
\p{Me} | Mark, Enclosing.
\p{Mn} | Mark, Nonspacing.
\p{N} | Number.
\p{Nd} | Number, Decimal Digit.
\p{Nl} | Number, Letter.
\p{No} | Number, Other.
\p{P} | Punctuation.
\p{Pc} | Punctuation, Connector.
\p{Pd} | Punctuation, Dash.
\p{Pe} | Punctuation, Close.
\p{Pf} | Punctuation, Final quote.
\p{Pi} | Punctuation, Initial quote.
\p{Po} | Punctuation, Other.
\p{Ps} | Punctuation, Open.
\p{S} | Symbol.
\p{Sc} | Symbol, Currency.
\p{Sk} | Symbol, Modifier.
\p{Sm} | Symbol, Math.
\p{So} | Symbol, Other.
\p{Z} | Separator.
\p{Zl} | Separator, Line.
\p{Zp} | Separator, Paragraph.
\p{Zs} | Separator, Space.
Sequence |
Meaning |
---|---|
\p{InBasic_Latin} |
Basic Latin. |
\p{InLatin_1_Supplement} |
Latin-1 Supplement. |
\p{InLatin_Extended_A} |
Latin Extended-A. |
\p{InLatin_Extended_B} |
Latin Extended-B. |
\p{InIPA_Extensions} |
IPA Extensions. |
\p{InSpacing_Modifier_Letters} |
Spacing Modifier Letters. |
\p{InCombining_Diacritical_Marks} |
Combining Diacritical Marks. |
\p{InGreek_and_Coptic} |
Greek and Coptic. |
\p{InCyrillic} |
Cyrillic. |
\p{InCyrillic_Supplement} |
Cyrillic Supplement. |
\p{InArmenian} |
Armenian. |
\p{InHebrew} |
Hebrew. |
\p{InArabic} |
Arabic. |
\p{InSyriac} |
Syriac. |
\p{InArabic_Supplement} |
Arabic Supplement. |
\p{InThaana} |
Thaana. |
\p{InNKo} |
NKo. |
\p{InSamaritan} |
Samaritan. |
\p{InMandaic} |
Mandaic. |
\p{InSyriac_Supplement} |
Syriac Supplement. |
\p{InArabic_Extended_B} |
Arabic Extended-B. |
\p{InArabic_Extended_A} |
Arabic Extended-A. |
\p{InDevanagari} |
Devanagari. |
\p{InBengali} |
Bengali. |
\p{InGurmukhi} |
Gurmukhi. |
\p{InGujarati} |
Gujarati. |
\p{InOriya} |
Oriya. |
\p{InTamil} |
Tamil. |
\p{InTelugu} |
Telugu. |
\p{InKannada} |
Kannada. |
\p{InMalayalam} |
Malayalam. |
\p{InSinhala} |
Sinhala. |
\p{InThai} |
Thai. |
\p{InLao} |
Lao. |
\p{InTibetan} |
Tibetan. |
\p{InMyanmar} |
Myanmar. |
\p{InGeorgian} |
Georgian. |
\p{InHangul_Jamo} |
Hangul Jamo. |
\p{InEthiopic} |
Ethiopic. |
\p{InEthiopic_Supplement} |
Ethiopic Supplement. |
\p{InCherokee} |
Cherokee. |
\p{InUnified_Canadian_Aboriginal_Syllabics} |
Unified Canadian Aboriginal Syllabics. |
\p{InOgham} |
Ogham. |
\p{InRunic} |
Runic. |
\p{InTagalog} |
Tagalog. |
\p{InHanunoo} |
Hanunoo. |
\p{InBuhid} |
Buhid. |
\p{InTagbanwa} |
Tagbanwa. |
\p{InKhmer} |
Khmer. |
\p{InMongolian} |
Mongolian. |
\p{InUnified_Canadian_Aboriginal_Syllabics_Extended} |
Unified Canadian Aboriginal Syllabics Extended. |
\p{InLimbu} |
Limbu. |
\p{InTai_Le} |
Tai Le. |
\p{InNew_Tai_Lue} |
New Tai Lue. |
\p{InKhmer_Symbols} |
Khmer Symbols. |
\p{InBuginese} |
Buginese. |
\p{InTai_Tham} |
Tai Tham. |
\p{InCombining_Diacritical_Marks_Extended} |
Combining Diacritical Marks Extended. |
\p{InBalinese} |
Balinese. |
\p{InSundanese} |
Sundanese. |
\p{InBatak} |
Batak. |
\p{InLepcha} |
Lepcha. |
\p{InOl_Chiki} |
Ol Chiki. |
\p{InCyrillic_Extended_C} |
Cyrillic Extended-C. |
\p{InGeorgian_Extended} |
Georgian Extended. |
\p{InSundanese_Supplement} |
Sundanese Supplement. |
\p{InVedic_Extensions} |
Vedic Extensions. |
\p{InPhonetic_Extensions} |
Phonetic Extensions. |
\p{InPhonetic_Extensions_Supplement} |
Phonetic Extensions Supplement. |
\p{InCombining_Diacritical_Marks_Supplement} |
Combining Diacritical Marks Supplement. |
\p{InLatin_Extended_Additional} |
Latin Extended Additional. |
\p{InGreek_Extended} |
Greek Extended. |
\p{InGeneral_Punctuation} |
General Punctuation. |
\p{InSuperscripts_and_Subscripts} |
Superscripts and Subscripts. |
\p{InCurrency_Symbols} |
Currency Symbols. |
\p{InCombining_Diacritical_Marks_for_Symbols} |
Combining Diacritical Marks for Symbols. |
\p{InLetterlike_Symbols} |
Letterlike Symbols. |
\p{InNumber_Forms} |
Number Forms. |
\p{InArrows} |
Arrows. |
\p{InMathematical_Operators} |
Mathematical Operators. |
\p{InMiscellaneous_Technical} |
Miscellaneous Technical. |
\p{InControl_Pictures} |
Control Pictures. |
\p{InOptical_Character_Recognition} |
Optical Character Recognition. |
\p{InEnclosed_Alphanumerics} |
Enclosed Alphanumerics. |
\p{InBox_Drawing} |
Box Drawing. |
\p{InBlock_Elements} |
Block Elements. |
\p{InGeometric_Shapes} |
Geometric Shapes. |
\p{InMiscellaneous_Symbols} |
Miscellaneous Symbols. |
\p{InDingbats} |
Dingbats. |
\p{InMiscellaneous_Mathematical_Symbols_A} |
Miscellaneous Mathematical Symbols-A. |
\p{InSupplemental_Arrows_A} |
Supplemental Arrows-A. |
\p{InBraille_Patterns} |
Braille Patterns. |
\p{InSupplemental_Arrows_B} |
Supplemental Arrows-B. |
\p{InMiscellaneous_Mathematical_Symbols_B} |
Miscellaneous Mathematical Symbols-B. |
\p{InSupplemental_Mathematical_Operators} |
Supplemental Mathematical Operators. |
\p{InMiscellaneous_Symbols_and_Arrows} |
Miscellaneous Symbols and Arrows. |
\p{InGlagolitic} |
Glagolitic. |
\p{InLatin_Extended_C} |
Latin Extended-C. |
\p{InCoptic} |
Coptic. |
\p{InGeorgian_Supplement} |
Georgian Supplement. |
\p{InTifinagh} |
Tifinagh. |
\p{InEthiopic_Extended} |
Ethiopic Extended. |
\p{InCyrillic_Extended_A} |
Cyrillic Extended-A. |
\p{InSupplemental_Punctuation} |
Supplemental Punctuation. |
\p{InCJK_Radicals_Supplement} |
CJK Radicals Supplement. |
\p{InKangxi_Radicals} |
Kangxi Radicals. |
\p{InIdeographic_Description_Characters} |
Ideographic Description Characters. |
\p{InCJK_Symbols_and_Punctuation} |
CJK Symbols and Punctuation. |
\p{InHiragana} |
Hiragana. |
\p{InKatakana} |
Katakana. |
\p{InBopomofo} |
Bopomofo. |
\p{InHangul_Compatibility_Jamo} |
Hangul Compatibility Jamo. |
\p{InKanbun} |
Kanbun. |
\p{InBopomofo_Extended} |
Bopomofo Extended. |
\p{InCJK_Strokes} |
CJK Strokes. |
\p{InKatakana_Phonetic_Extensions} |
Katakana Phonetic Extensions. |
\p{InEnclosed_CJK_Letters_and_Months} |
Enclosed CJK Letters and Months. |
\p{InCJK_Compatibility} |
CJK Compatibility. |
\p{InCJK_Unified_Ideographs_Extension_A} |
CJK Unified Ideographs Extension A. |
\p{InYijing_Hexagram_Symbols} |
Yijing Hexagram Symbols. |
\p{InCJK_Unified_Ideographs} |
CJK Unified Ideographs. |
\p{InYi_Syllables} |
Yi Syllables. |
\p{InYi_Radicals} |
Yi Radicals. |
\p{InLisu} |
Lisu. |
\p{InVai} |
Vai. |
\p{InCyrillic_Extended_B} |
Cyrillic Extended-B. |
\p{InBamum} |
Bamum. |
\p{InModifier_Tone_Letters} |
Modifier Tone Letters. |
\p{InLatin_Extended_D} |
Latin Extended-D. |
\p{InSyloti_Nagri} |
Syloti Nagri. |
\p{InCommon_Indic_Number_Forms} |
Common Indic Number Forms. |
\p{InPhags_pa} |
Phags-pa. |
\p{InSaurashtra} |
Saurashtra. |
\p{InDevanagari_Extended} |
Devanagari Extended. |
\p{InKayah_Li} |
Kayah Li. |
\p{InRejang} |
Rejang. |
\p{InHangul_Jamo_Extended_A} |
Hangul Jamo Extended-A. |
\p{InJavanese} |
Javanese. |
\p{InMyanmar_Extended_B} |
Myanmar Extended-B. |
\p{InCham} |
Cham. |
\p{InMyanmar_Extended_A} |
Myanmar Extended-A. |
\p{InTai_Viet} |
Tai Viet. |
\p{InMeetei_Mayek_Extensions} |
Meetei Mayek Extensions. |
\p{InEthiopic_Extended_A} |
Ethiopic Extended-A. |
\p{InLatin_Extended_E} |
Latin Extended-E. |
\p{InCherokee_Supplement} |
Cherokee Supplement. |
\p{InMeetei_Mayek} |
Meetei Mayek. |
\p{InHangul_Syllables} |
Hangul Syllables. |
\p{InHangul_Jamo_Extended_B} |
Hangul Jamo Extended-B. |
\p{InHigh_Surrogates} |
High Surrogates. |
\p{InHigh_Private_Use_Surrogates} |
High Private Use Surrogates. |
\p{InLow_Surrogates} |
Low Surrogates. |
\p{InPrivate_Use_Area} |
Private Use Area. |
\p{InCJK_Compatibility_Ideographs} |
CJK Compatibility Ideographs. |
\p{InAlphabetic_Presentation_Forms} |
Alphabetic Presentation Forms. |
\p{InArabic_Presentation_Forms_A} |
Arabic Presentation Forms-A. |
\p{InVariation_Selectors} |
Variation Selectors. |
\p{InVertical_Forms} |
Vertical Forms. |
\p{InCombining_Half_Marks} |
Combining Half Marks. |
\p{InCJK_Compatibility_Forms} |
CJK Compatibility Forms. |
\p{InSmall_Form_Variants} |
Small Form Variants. |
\p{InArabic_Presentation_Forms_B} |
Arabic Presentation Forms-B. |
\p{InHalfwidth_and_Fullwidth_Forms} |
Halfwidth and Fullwidth Forms. |
\p{InSpecials} |
Specials. |
\p{InLinear_B_Syllabary} |
Linear B Syllabary. |
\p{InLinear_B_Ideograms} |
Linear B Ideograms. |
\p{InAegean_Numbers} |
Aegean Numbers. |
\p{InAncient_Greek_Numbers} |
Ancient Greek Numbers. |
\p{InAncient_Symbols} |
Ancient Symbols. |
\p{InPhaistos_Disc} |
Phaistos Disc. |
\p{InLycian} |
Lycian. |
\p{InCarian} |
Carian. |
\p{InCoptic_Epact_Numbers} |
Coptic Epact Numbers. |
\p{InOld_Italic} |
Old Italic. |
\p{InGothic} |
Gothic. |
\p{InOld_Permic} |
Old Permic. |
\p{InUgaritic} |
Ugaritic. |
\p{InOld_Persian} |
Old Persian. |
\p{InDeseret} |
Deseret. |
\p{InShavian} |
Shavian. |
\p{InOsmanya} |
Osmanya. |
\p{InOsage} |
Osage. |
\p{InElbasan} |
Elbasan. |
\p{InCaucasian_Albanian} |
Caucasian Albanian. |
\p{InVithkuqi} |
Vithkuqi. |
\p{InTodhri} |
Todhri. |
\p{InLinear_A} |
Linear A. |
\p{InLatin_Extended_F} |
Latin Extended-F. |
\p{InCypriot_Syllabary} |
Cypriot Syllabary. |
\p{InImperial_Aramaic} |
Imperial Aramaic. |
\p{InPalmyrene} |
Palmyrene. |
\p{InNabataean} |
Nabataean. |
\p{InHatran} |
Hatran. |
\p{InPhoenician} |
Phoenician. |
\p{InLydian} |
Lydian. |
\p{InMeroitic_Hieroglyphs} |
Meroitic Hieroglyphs. |
\p{InMeroitic_Cursive} |
Meroitic Cursive. |
\p{InKharoshthi} |
Kharoshthi. |
\p{InOld_South_Arabian} |
Old South Arabian. |
\p{InOld_North_Arabian} |
Old North Arabian. |
\p{InManichaean} |
Manichaean. |
\p{InAvestan} |
Avestan. |
\p{InInscriptional_Parthian} |
Inscriptional Parthian. |
\p{InInscriptional_Pahlavi} |
Inscriptional Pahlavi. |
\p{InPsalter_Pahlavi} |
Psalter Pahlavi. |
\p{InOld_Turkic} |
Old Turkic. |
\p{InOld_Hungarian} |
Old Hungarian. |
\p{InHanifi_Rohingya} |
Hanifi Rohingya. |
\p{InGaray} |
Garay. |
\p{InRumi_Numeral_Symbols} |
Rumi Numeral Symbols. |
\p{InYezidi} |
Yezidi. |
\p{InArabic_Extended_C} |
Arabic Extended-C. |
\p{InOld_Sogdian} |
Old Sogdian. |
\p{InSogdian} |
Sogdian. |
\p{InOld_Uyghur} |
Old Uyghur. |
\p{InChorasmian} |
Chorasmian. |
\p{InElymaic} |
Elymaic. |
\p{InBrahmi} |
Brahmi. |
\p{InKaithi} |
Kaithi. |
\p{InSora_Sompeng} |
Sora Sompeng. |
\p{InChakma} |
Chakma. |
\p{InMahajani} |
Mahajani. |
\p{InSharada} |
Sharada. |
\p{InSinhala_Archaic_Numbers} |
Sinhala Archaic Numbers. |
\p{InKhojki} |
Khojki. |
\p{InMultani} |
Multani. |
\p{InKhudawadi} |
Khudawadi. |
\p{InGrantha} |
Grantha. |
\p{InTulu-Tigalari} |
Tulu-Tigalari. |
\p{InNewa} |
Newa. |
\p{InTirhuta} |
Tirhuta. |
\p{InSiddham} |
Siddham. |
\p{InModi} |
Modi. |
\p{InMongolian_Supplement} |
Mongolian Supplement. |
\p{InTakri} |
Takri. |
\p{InMyanmar_Extended-C} |
Myanmar Extended C. |
\p{InAhom} |
Ahom. |
\p{InDogra} |
Dogra. |
\p{InWarang_Citi} |
Warang Citi. |
\p{InDives_Akuru} |
Dives Akuru. |
\p{InNandinagari} |
Nandinagari. |
\p{InZanabazar_Square} |
Zanabazar Square. |
\p{InSoyombo} |
Soyombo. |
\p{InUnified_Canadian_Aboriginal_Syllabics_Extended_A} |
Unified Canadian Aboriginal Syllabics Extended-A. |
\p{InPau_Cin_Hau} |
Pau Cin Hau. |
\p{InDevanagari_Extended_A} |
Devanagari Extended-A. |
\p{InSunuwar} |
Sunuwar. |
\p{InBhaiksuki} |
Bhaiksuki. |
\p{InMarchen} |
Marchen. |
\p{InMasaram_Gondi} |
Masaram Gondi. |
\p{InGunjala_Gondi} |
Gunjala Gondi. |
\p{InMakasar} |
Makasar. |
\p{InKawi} |
Kawi. |
\p{InLisu_Supplement} |
Lisu Supplement. |
\p{InTamil_Supplement} |
Tamil Supplement. |
\p{InCuneiform} |
Cuneiform. |
\p{InCuneiform_Numbers_and_Punctuation} |
Cuneiform Numbers and Punctuation. |
\p{InEarly_Dynastic_Cuneiform} |
Early Dynastic Cuneiform. |
\p{InCypro_Minoan} |
Cypro-Minoan. |
\p{InEgyptian_Hieroglyphs} |
Egyptian Hieroglyphs. |
\p{InEgyptian_Hieroglyph_Format_Controls} |
Egyptian Hieroglyph Format Controls. |
\p{InEgyptian_Hieroglyphs_Extended-A} |
Egyptian Hieroglyph Extended A. |
\p{InAnatolian_Hieroglyphs} |
Anatolian Hieroglyphs. |
\p{InGurung_Khema} |
Gurung Khema. |
\p{InBamum_Supplement} |
Bamum Supplement. |
\p{InMro} |
Mro. |
\p{InTangsa} |
Tangsa. |
\p{InBassa_Vah} |
Bassa Vah. |
\p{InPahawh_Hmong} |
Pahawh Hmong. |
\p{InKirat_Rai} |
Kirat Rai. |
\p{InMedefaidrin} |
Medefaidrin. |
\p{InMiao} |
Miao. |
\p{InIdeographic_Symbols_and_Punctuation} |
Ideographic Symbols and Punctuation. |
\p{InTangut} |
Tangut. |
\p{InTangut_Components} |
Tangut Components. |
\p{InKhitan_Small_Script} |
Khitan Small Script. |
\p{InTangut_Supplement} |
Tangut Supplement. |
\p{InKana_Extended_B} |
Kana Extended-B. |
\p{InKana_Supplement} |
Kana Supplement. |
\p{InKana_Extended_A} |
Kana Extended-A. |
\p{InSmall_Kana_Extension} |
Small Kana Extension. |
\p{InNushu} |
Nushu. |
\p{InDuployan} |
Duployan. |
\p{InShorthand_Format_Controls} |
Shorthand Format Controls. |
\p{InSymbols_for_Legacy_Computing_Supplement} |
Symbols for Legacy Computing Supplement. |
\p{InZnamenny_Musical_Notation} |
Znamenny Musical Notation. |
\p{InByzantine_Musical_Symbols} |
Byzantine Musical Symbols. |
\p{InMusical_Symbols} |
Musical Symbols. |
\p{InAncient_Greek_Musical_Notation} |
Ancient Greek Musical Notation. |
\p{InKaktovik_Numerals} |
Kaktovik Numerals. |
\p{InMayan_Numerals} |
Mayan Numerals. |
\p{InTai_Xuan_Jing_Symbols} |
Tai Xuan Jing Symbols. |
\p{InCounting_Rod_Numerals} |
Counting Rod Numerals. |
\p{InMathematical_Alphanumeric_Symbols} |
Mathematical Alphanumeric Symbols. |
\p{InSutton_SignWriting} |
Sutton SignWriting. |
\p{InLatin_Extended_G} |
Latin Extended-G. |
\p{InGlagolitic_Supplement} |
Glagolitic Supplement. |
\p{InCyrillic_Extended_D} |
Cyrillic Extended-D. |
\p{InNyiakeng_Puachue_Hmong} |
Nyiakeng Puachue Hmong. |
\p{InToto} |
Toto. |
\p{InWancho} |
Wancho. |
\p{InNag_Mundari} |
Nag Mundari. |
\p{InOl_Onal} |
Ol Onal. |
\p{InEthiopic_Extended_B} |
Ethiopic Extended-B. |
\p{InMende_Kikakui} |
Mende Kikakui. |
\p{InAdlam} |
Adlam. |
\p{InIndic_Siyaq_Numbers} |
Indic Siyaq Numbers. |
\p{InOttoman_Siyaq_Numbers} |
Ottoman Siyaq Numbers. |
\p{InArabic_Mathematical_Alphabetic_Symbols} |
Arabic Mathematical Alphabetic Symbols. |
\p{InMahjong_Tiles} |
Mahjong Tiles. |
\p{InDomino_Tiles} |
Domino Tiles. |
\p{InPlaying_Cards} |
Playing Cards. |
\p{InEnclosed_Alphanumeric_Supplement} |
Enclosed Alphanumeric Supplement. |
\p{InEnclosed_Ideographic_Supplement} |
Enclosed Ideographic Supplement. |
\p{InMiscellaneous_Symbols_and_Pictographs} |
Miscellaneous Symbols and Pictographs. |
\p{InEmoticons} |
Emoticons. |
\p{InOrnamental_Dingbats} |
Ornamental Dingbats. |
\p{InTransport_and_Map_Symbols} |
Transport and Map Symbols. |
\p{InAlchemical_Symbols} |
Alchemical Symbols. |
\p{InGeometric_Shapes_Extended} |
Geometric Shapes Extended. |
\p{InSupplemental_Arrows_C} |
Supplemental Arrows-C. |
\p{InSupplemental_Symbols_and_Pictographs} |
Supplemental Symbols and Pictographs. |
\p{InChess_Symbols} |
Chess Symbols. |
\p{InSymbols_and_Pictographs_Extended_A} |
Symbols and Pictographs Extended-A. |
\p{InSymbols_for_Legacy_Computing} |
Symbols for Legacy Computing. |
\p{InCJK_Unified_Ideographs_Extension_B} |
CJK Unified Ideographs Extension B. |
\p{InCJK_Unified_Ideographs_Extension_C} |
CJK Unified Ideographs Extension C. |
\p{InCJK_Unified_Ideographs_Extension_D} |
CJK Unified Ideographs Extension D. |
\p{InCJK_Unified_Ideographs_Extension_E} |
CJK Unified Ideographs Extension E. |
\p{InCJK_Unified_Ideographs_Extension_F} |
CJK Unified Ideographs Extension F. |
\p{InCJK_Unified_Ideographs_Extension_I} |
CJK Unified Ideographs Extension I. |
\p{InCJK_Compatibility_Ideographs_Supplement} |
CJK Compatibility Ideographs Supplement. |
\p{InCJK_Unified_Ideographs_Extension_G} |
CJK Unified Ideographs Extension G. |
\p{InCJK_Unified_Ideographs_Extension_H} |
CJK Unified Ideographs Extension H. |
\p{InTags} |
Tags. |
\p{InVariation_Selectors_Supplement} |
Variation Selectors Supplement. |
\p{InSupplementary_Private_Use_Area_A} |
Supplementary Private Use Area-A. |
\p{InSupplementary_Private_Use_Area_B} |
Supplementary Private Use Area-B. |
Sequence |
Meaning |
---|---|
\p{IsAdlam} |
Adlam. |
\p{IsAhom} |
Ahom. |
\p{IsAnatolian_Hieroglyphs} |
Anatolian Hieroglyphs. |
\p{IsArabic} |
Arabic. |
\p{IsArmenian} |
Armenian. |
\p{IsAvestan} |
Avestan. |
\p{IsBalinese} |
Balinese. |
\p{IsBamum} |
Bamum. |
\p{IsBassa_Vah} |
Bassa Vah. |
\p{IsBatak} |
Batak. |
\p{IsBengali} |
Bengali. |
\p{IsBhaiksuki} |
Bhaiksuki. |
\p{IsBopomofo} |
Bopomofo. |
\p{IsBrahmi} |
Brahmi. |
\p{IsBraille} |
Braille. |
\p{IsBuginese} |
Buginese. |
\p{IsBuhid} |
Buhid. |
\p{IsCanadian_Aboriginal} |
Canadian Aboriginal. |
\p{IsCarian} |
Carian. |
\p{IsCaucasian_Albanian} |
Caucasian Albanian. |
\p{IsChakma} |
Chakma. |
\p{IsCham} |
Cham. |
\p{IsCherokee} |
Cherokee. |
\p{IsChorasmian} |
Chorasmian. |
\p{IsCommon} |
Common. |
\p{IsCoptic} |
Coptic. |
\p{IsCuneiform} |
Cuneiform. |
\p{IsCypriot} |
Cypriot. |
\p{IsCypro_Minoan} |
Cypro-Minoan |
\p{IsCyrillic} |
Cyrillic. |
\p{IsDeseret} |
Deseret. |
\p{IsDevanagari} |
Devanagari. |
\p{IsDives_Akuru} |
Dives Akuru. |
\p{IsDogra} |
Dogra. |
\p{IsDuployan} |
Duployan. |
\p{IsEgyptian_Hieroglyphs} |
Egyptian Hieroglyphs. |
\p{IsElbasan} |
Elbasan. |
\p{IsElymaic} |
Elymaic. |
\p{IsEthiopic} |
Ethiopic. |
\p{IsGaray} |
Garay. |
\p{IsGeorgian} |
Georgian. |
\p{IsGlagolitic} |
Glagolitic. |
\p{IsGothic} |
Gothic. |
\p{IsGrantha} |
Grantha. |
\p{IsGreek} |
Greek. |
\p{IsGujarati} |
Gujarati. |
\p{IsGunjala_Gondi} |
Gunjala Gondi. |
\p{IsGurmukhi} |
Gurmukhi. |
\p{IsGurung_Khema} |
Gurung Khema. |
\p{IsHan} |
Han. |
\p{IsHangul} |
Hangul. |
\p{IsHanifi_Rohingya} |
Hanifi Rohingya. |
\p{IsHanunoo} |
Hanunoo. |
\p{IsHatran} |
Hatran. |
\p{IsHebrew} |
Hebrew. |
\p{IsHiragana} |
Hiragana. |
\p{IsImperial_Aramaic} |
Imperial Aramaic. |
\p{IsInherited} |
Inherited. |
\p{IsInscriptional_Pahlavi} |
Inscriptional Pahlavi. |
\p{IsInscriptional_Parthian} |
Inscriptional Parthian. |
\p{IsJavanese} |
Javanese. |
\p{IsKaithi} |
Kaithi. |
\p{IsKannada} |
Kannada. |
\p{IsKatakana} |
Katakana. |
\p{IsKawi} |
Kawi. |
\p{IsKayah_Li} |
Kayah Li. |
\p{IsKharoshthi} |
Kharoshthi. |
\p{IsKhitan_Small_Script} |
Khitan Small Script. |
\p{IsKhmer} |
Khmer. |
\p{IsKhojki} |
Khojki. |
\p{IsKhudawadi} |
Khudawadi. |
\p{IsKirat_Rai} |
Kirat Rai. |
\p{IsLao} |
Lao. |
\p{IsLatin} |
Latin. |
\p{IsLepcha} |
Lepcha. |
\p{IsLimbu} |
Limbu. |
\p{IsLinear_A} |
Linear A. |
\p{IsLinear_B} |
Linear B. |
\p{IsLisu} |
Lisu. |
\p{IsLycian} |
Lycian. |
\p{IsLydian} |
Lydian. |
\p{IsMahajani} |
Mahajani. |
\p{IsMakasar} |
Makasar. |
\p{IsMalayalam} |
Malayalam. |
\p{IsMandaic} |
Mandaic. |
\p{IsManichaean} |
Manichaean. |
\p{IsMarchen} |
Marchen. |
\p{IsMasaram_Gondi} |
Masaram Gondi. |
\p{IsMedefaidrin} |
Medefaidrin. |
\p{IsMeetei_Mayek} |
Meetei Mayek. |
\p{IsMende_Kikakui} |
Mende Kikakui. |
\p{IsMeroitic_Cursive} |
Meroitic Cursive. |
\p{IsMeroitic_Hieroglyphs} |
Meroitic Hieroglyphs. |
\p{IsMiao} |
Miao. |
\p{IsModi} |
Modi. |
\p{IsMongolian} |
Mongolian. |
\p{IsMro} |
Mro. |
\p{IsMultani} |
Multani. |
\p{IsMyanmar} |
Myanmar. |
\p{IsNabataean} |
Nabataean. |
\p{IsNag_Mundari} |
Nag Mundari. |
\p{IsNandinagari} |
Nandinagari. |
\p{IsNew_Tai_Lue} |
New Tai Lue. |
\p{IsNewa} |
Newa. |
\p{IsNko} |
Nko. |
\p{IsNushu} |
Nushu. |
\p{IsNyiakeng_Puachue_Hmong} |
Nyiakeng Puachue Hmong. |
\p{IsOgham} |
Ogham. |
\p{IsOl_Chiki} |
Ol Chiki. |
\p{IsOl_Onal} |
Ol Onal. |
\p{IsOld_Hungarian} |
Old Hungarian. |
\p{IsOld_Italic} |
Old Italic. |
\p{IsOld_North_Arabian} |
Old North Arabian. |
\p{IsOld_Permic} |
Old Permic. |
\p{IsOld_Persian} |
Old Persian. |
\p{IsOld_Sogdian} |
Old Sogdian. |
\p{IsOld_South_Arabian} |
Old South Arabian. |
\p{IsOld_Turkic} |
Old Turkic. |
\p{IsOld_Uyghur} |
Old Uyghur. |
\p{IsOriya} |
Oriya. |
\p{IsOsage} |
Osage. |
\p{IsOsmanya} |
Osmanya. |
\p{IsPahawh_Hmong} |
Pahawh Hmong. |
\p{IsPalmyrene} |
Palmyrene. |
\p{IsPau_Cin_Hau} |
Pau Cin Hau. |
\p{IsPhags_Pa} |
Phags Pa. |
\p{IsPhoenician} |
Phoenician. |
\p{IsPsalter_Pahlavi} |
Psalter Pahlavi. |
\p{IsRejang} |
Rejang. |
\p{IsRunic} |
Runic. |
\p{IsSamaritan} |
Samaritan. |
\p{IsSaurashtra} |
Saurashtra. |
\p{IsSharada} |
Sharada. |
\p{IsShavian} |
Shavian. |
\p{IsSiddham} |
Siddham. |
\p{IsSignWriting} |
SignWriting. |
\p{IsSinhala} |
Sinhala. |
\p{IsSogdian} |
Sogdian. |
\p{IsSora_Sompeng} |
Sora Sompeng. |
\p{IsSoyombo} |
Soyombo. |
\p{IsSundanese} |
Sundanese. |
\p{IsSunuwar} |
Sunuwar. |
\p{IsSyloti_Nagri} |
Syloti Nagri. |
\p{IsSyriac} |
Syriac. |
\p{IsTagalog} |
Tagalog. |
\p{IsTagbanwa} |
Tagbanwa. |
\p{IsTai_Le} |
Tai Le. |
\p{IsTai_Tham} |
Tai Tham. |
\p{IsTai_Viet} |
Tai Viet. |
\p{IsTakri} |
Takri. |
\p{IsTamil} |
Tamil. |
\p{IsTangsa} |
Tangsa. |
\p{IsTangut} |
Tangut. |
\p{IsTelugu} |
Telugu. |
\p{IsThaana} |
Thaana. |
\p{IsThai} |
Thai. |
\p{IsTibetan} |
Tibetan. |
\p{IsTifinagh} |
Tifinagh. |
\p{IsTirhuta} |
Tirhuta. |
\p{IsTodhri} |
Todhri. |
\p{IsToto} |
Toto. |
\p{IsTulu_Tigalari} |
Tulu Tigalari. |
\p{IsUgaritic} |
Ugaritic. |
\p{IsVai} |
Vai. |
\p{IsVithkuqi} |
Vithkuqi. |
\p{IsWancho} |
Wancho. |
\p{IsWarang_Citi} |
Warang Citi. |
\p{IsYezidi} |
Yezidi. |
\p{IsYi} |
Yi. |
\p{IsZanabazar_Square} |
Zanabazar Square. |
Sequence | Meaning
---|---
...|... | Try subpatterns in alternation.
* | Match 0 or more times (greedy).
+ | Match 1 or more times (greedy).
? | Match 0 or 1 times (greedy).
{n} | Match exactly n times.
{n,} | Match at least n times (greedy).
{n,m} | Match at least n times but no more than m times (greedy).
*? | Match 0 or more times (abstemious).
+? | Match 1 or more times (abstemious).
?? | Match 0 or 1 times (abstemious).
{n,}? | Match at least n times (abstemious).
{n,m}? | Match at least n times but no more than m times (abstemious).
{MACRO} | Include the regex MACRO in the current regex.
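The {MACRO} syntax works together with rules.insert_macro(), which is also used by the JSON example later on this page. A minimal sketch, with arbitrary macro names and token ids:

#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/iterator.hpp>
#include <string>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    // Name the sub-expressions...
    rules.insert_macro("DIGIT", "[0-9]");
    rules.insert_macro("EXP", "[eE][-+]?[0-9]+");
    // ...and expand them with {MACRO} inside a later regex.
    rules.push("{DIGIT}+([.]{DIGIT}+)?({EXP})?", 1);
    rules.push("\\s+", 2);
    lexertl::generator::build(rules, sm);

    std::string input("42 3.14 1e-5");
    lexertl::siterator iter(input.begin(), input.end(), sm);
    lexertl::siterator end;

    for (; iter != end; ++iter)
    {
        std::cout << "Id: " << iter->id << ", Token: '" <<
            iter->str() << "'\n";
    }

    return 0;
}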
Sequence | Meaning
---|---
^ | Start of string or after a newline.
$ | End of string or before a newline.
Sequence | Meaning
---|---
(...) | Group a regular expression to override default operator precedence.
(?r-s:pattern) | Apply option r and omit option s while interpreting pattern. Options may be zero or more of the characters i, s, or x. i means case-insensitive; -i means case-sensitive. s alters the meaning of '.' to match any character whatsoever; -s alters the meaning of '.' to match any character except '\n'. x ignores comments and whitespace in patterns. Whitespace is ignored unless it is backslash-escaped, contained within ""s, or appears inside a character range. These options can be applied globally at the rules level by passing a combination of the bit flags.
(?# comment ) | Omit everything within (). The first ) character encountered ends the pattern, so it is not possible for the comment to contain a ) character. The comment may span lines.
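As a short hedged sketch of the (?r-s:pattern) options above, the following uses i to make a single keyword case-insensitive and s so that '.' also matches a newline (the keyword and ids are arbitrary):

#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>
#include <string>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    // Case-insensitive keyword
    rules.push("(?i:select)", 1);
    // '.' matches any character at all, including '\n'
    rules.push("(?s:.)", 2);
    lexertl::generator::build(rules, sm);

    std::string input("SELECT\nselect");
    lexertl::smatch results(input.begin(), input.end());

    // Read ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}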
The following POSIX character sets are supported: [:alnum:], [:alpha:], [:blank:], [:cntrl:], [:digit:], [:graph:], [:lower:], [:print:], [:punct:], [:space:], [:upper:] and [:xdigit:].
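A brief sketch using the POSIX sets inside character classes (the identifier/number rules and the ids are just illustrative):

#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/iterator.hpp>
#include <string>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    // POSIX sets are used inside a normal character class.
    rules.push("[[:alpha:]_][[:alnum:]_]*", 1);
    rules.push("[[:digit:]]+", 2);
    rules.push("[[:space:]]+", 3);
    lexertl::generator::build(rules, sm);

    std::string input("count_1 42");
    lexertl::siterator iter(input.begin(), input.end(), sm);
    lexertl::siterator end;

    for (; iter != end; ++iter)
    {
        std::cout << "Id: " << iter->id << ", Token: '" <<
            iter->str() << "'\n";
    }

    return 0;
}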
As you will see from the examples below, match_results or recursive_match_results are the structs passed to lookup. As with flex, end of input returns an id of 0 (you can set this to another value if you like using the eoi() method on the rules class) and an unrecognised token is returned as a single character; in this case the id returned is npos. This value is available from the static function npos() on match_results.
match_results
has the following members:
id_type id;
id_type user_id;
iter_type first;
iter_type second;
iter_type eoi;
bool bol;
id_type state;
id defines the numeric id for the current match.
user_id can optionally be set as a second value (e.g. an index into a table of semantic actions).
first is the start of the token.
second is the end of the token.
eoi is the end of the input.
bol is a flag indicating either the start of input or that the last character read was \n.
state is the current start state.
recursive_match_results derives from match_results and adds the following:
std::stack<id_type_pair> stack;
This struct is used when recursive rules have been defined.
Note that various typedefs are available for match_results and they are used below. The typedefs follow the convention used by std::regex.
#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);

    std::string input("abc012Ad3e4");
    lexertl::smatch results(input.begin(), input.end());

    // Read ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}
#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/iterator.hpp>
#include <lexertl/lookup.hpp>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);

    std::string input("abc012Ad3e4");
    lexertl::siterator iter(input.begin(), input.end(), sm);
    lexertl::siterator end;

    for (; iter != end; ++iter)
    {
        std::cout << "Id: " << iter->id << ", Token: '" <<
            iter->str() << "'\n";
    }

    return 0;
}
It is possible to modify the results object to (re)start at a specific position. Because there are multiple state variables inside the results object, care needs to be taken to set it up correctly when you do this.
First of all, the most important member to set is second. Although lookup automatically copies this to first when fetching the next token, I recommend setting first at the same time for completeness (you could set id to npos too if you really wanted to!). If you are using "beginning of line" matching, then you need to set the bol flag accordingly, so that the lexer knows whether the next token starts at the beginning of a line or not. Finally, if you are using 'multi state' lexing, be sure to set the state member correctly.
Here is an example:
#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;
    std::string input("can\ncmd\na cmd\ncmd again\nanother cmd");
    // This time we set the iterator to end(), because we are going to reset
    // it anyway.
    lexertl::smatch results(input.end(), input.end());

    rules.push("can", 1);
    rules.push("^cmd$", 2);
    rules.push("^cmd", 3);
    rules.push("cmd$", 4);
    rules.push("[a-z]+", 50);
    rules.push("\\s+", 100);
    lexertl::generator::build(rules, sm);

    // Skip the first 4 characters
    results.first = results.second = input.begin() + 4;
    results.bol = true;
    // Redundant statement, but just for example:
    results.state = 0;

    // Look ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}
#include <lexertl/debug.hpp>
#include <lexertl/generator.hpp>
#include <iostream>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);
    sm.minimise();
    lexertl::debug::dump(sm, std::cout);
    return 0;
}
If you wish to write your own code generator, you will find it useful to generate a char state machine, as it contains states with simple transitions from character ranges to other states. Even in Unicode mode this allows you to observe real character ranges. In contrast, state_machine has a two-phase lookup and always slices Unicode characters into bytes in order to compress the data.
#include <lexertl/debug.hpp>
#include <lexertl/generator.hpp>
#include <iostream>

int main()
{
    lexertl::rules rules;
    lexertl::char_state_machine csm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::char_generator::build(rules, csm);
    lexertl::debug::dump(csm, std::cout);
    return 0;
}
See state_machine.hpp
for the char_state_machine
structure.
lexertl includes two classes to support iterating through a file: memory_file and stream_shared_iterator.
#include <fstream>
#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>
#include <lexertl/memory_file.hpp>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;
    lexertl::memory_file mf("C:\\Ben\\Dev\\lexertl\\policy.txt");
    lexertl::cmatch results(mf.data(), mf.data() + mf.size());

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);

    // Look ahead
    lexertl::lookup(sm, results);

    while (results.id != 0)
    {
        std::cout << "Id: " << results.id << ", Token: '" <<
            results.str() << "'\n";
        lexertl::lookup(sm, results);
    }

    return 0;
}
#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/iterator.hpp>
#include <lexertl/memory_file.hpp>
#include <lexertl/utf_iterators.hpp>

int main()
{
    try
    {
        using rules_type = lexertl::basic_rules<char, char32_t>;
        using utf8_iterator = lexertl::cutf8_in_utf32_out_iterator;
        using results = lexertl::match_results<utf8_iterator>;
        using iterator = lexertl::iterator<utf8_iterator,
            lexertl::u32state_machine, results>;
        rules_type rules;
        lexertl::u32state_machine sm;
        lexertl::memory_file mf(R"(C:\Users\bh\Desktop\Unicode 2.0 test page.html)");

        rules.push(R"((\p{InIPA_Extensions}[ \r\n]+)+)", 1);
        rules.push("(?s:.)", rules.skip());
        lexertl::basic_generator<rules_type,
            lexertl::u32state_machine>::build(rules, sm);

        iterator iter(utf8_iterator(mf.data(), mf.data() + mf.size()),
            utf8_iterator(mf.data() + mf.size(), mf.data() + mf.size()), sm);
        iterator end;

        for (; iter != end; ++iter)
        {
            // Demonstrates dereferencing the UTF-32 iterator for each match.
            char32_t c = *iter->first;
            int i = 0;
        }
    }
    catch (const std::exception& e)
    {
        std::cout << e.what() << '\n';
    }

    return 0;
}
Currently there is a single code generator that generates C++ table based lookup:
#include <lexertl/generator.hpp>
#include <lexertl/generate_cpp.hpp>
#include <iostream>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("[0-9]+", 1);
    rules.push("[a-z]+", 2);
    lexertl::generator::build(rules, sm);
    sm.minimise();
    lexertl::table_based_cpp::generate_cpp("lookup", sm, false, std::cout);
    return 0;
}
If you want to generate code for another language, then you should easily be able to use this generator as a template.
A lexer can have more than one state machine. This allows you to lex different tokens depending on context, thus allowing simple parsing to take place. To allow this, a 'start state' must additionally be specified at the beginning of the rules.push() call and an 'exit state' at the end. If '*' is used as the start state, then the rule is applied to all lexer states. If '.' is specified as the exit state, then the lexer state is unchanged when that rule matches.
To demonstrate, the classic example below shows matching multi-line C comments using start states:
#include <lexertl/generator.hpp>
#include <lexertl/lookup.hpp>

int main()
{
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push_state("COMMENT");
    rules.push("INITIAL", "\"/*\"", "COMMENT");
    rules.push("COMMENT", "[^*]+|(?s:.)", ".");
    rules.push("COMMENT", "\"*/\"", 1, "INITIAL");
    lexertl::generator::build(rules, sm);

    std::string input("/* test */");
    lexertl::smatch results(input.begin(), input.end());

    do
    {
        lexertl::lookup(sm, results);
    } while (results.id != 0);

    return 0;
}
Note that by omitting ids for all but the end of the sequence, we can get the lexer to continue even though it has changed state.
I have since written parsertl as a complementary parser generator, but this recursive mode explanation is retained for general interest.
To make this work we need a couple of tweaks. To start with, we need to introduce the concept of pushing and popping states. An exit state with '>' before the name means push, whereas '<' as the exit state means pop.
It turns out we also need a means of continuing even when a particular rule has matched. We achieve this by omitting an id when adding a rule. Finally, note that even when we specify an id (as normal) for the exit conditions, the rule will only finish when all previous pushes (with no id specified) have been popped.
See the example below where all of these concepts are put together in order to parse recursive C style comments.
#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>

int main()
{
    enum {eEOI, eComment};
    lexertl::rules rules;
    lexertl::state_machine sm;
    std::string input("/* /* recursive */*/");

    rules.push_state("OPEN");
    rules.push("INITIAL,OPEN", "[/][*]", ">OPEN");
    rules.push("OPEN", "(?s:.)", ".");
    rules.push("OPEN", "[*][/]", 1, "<");
    lexertl::generator::build(rules, sm);

    lexertl::srmatch results(input.begin(), input.end());

    // Read ahead
    lexertl::lookup(sm, results);

    while (results.id != eEOI && results.id != results.npos())
    {
        switch (results.id)
        {
        case eComment:
            std::cout << results.str() << std::endl;
            break;
        default:
            std::cout << "Error at '" << &*results.first << "'\n";
            break;
        }

        lexertl::lookup(sm, results);
    }

    return 0;
}
A full grammar for parsing JSON is shown below:
#include <lexertl/generator.hpp>
#include <lexertl/lookup.hpp>
#include <lexertl/memory_file.hpp>
#include <lexertl/utf_iterators.hpp>
#include <iostream>
#include <string>

int main()
{
    try
    {
        using urules = lexertl::basic_rules<char, char32_t>;
        using usm = lexertl::basic_state_machine<char32_t>;
        using utf_in_iter = lexertl::basic_utf8_in_iterator<const char *, char32_t>;
        using utf_out_iter = lexertl::basic_utf8_out_iterator<utf_in_iter>;
        urules rules;
        usm sm;
        lexertl::memory_file file("C:\\json.txt");
        const char *begin = file.data();
        const char *end = begin + file.size();
        lexertl::recursive_match_results<utf_in_iter>
            results(utf_in_iter(begin, end), utf_in_iter(end, end));
        enum {eEOF, eString, eNumber, eBoolean, eOpenOb, eName, eCloseOb,
            eOpenArr, eCloseArr, eNull};

        // http://code.google.com/p/bsn-goldparser/wiki/JsonGrammar
        rules.insert_macro("STRING", "[\"]([ -\\x10ffff]{-}[\"\\\\]|"
            "\\\\([\"\\\\/bfnrt]|u[0-9a-fA-F]{4}))*[\"]");
        rules.insert_macro("NUMBER", "-?(0|[1-9]\\d*)([.]\\d+)?([eE][-+]?\\d+)?");
        rules.insert_macro("BOOL", "true|false");
        rules.insert_macro("NULL", "null");

        rules.push_state("END");
        rules.push_state("OBJECT");
        rules.push_state("NAME");
        rules.push_state("COLON");
        rules.push_state("OB_VALUE");
        rules.push_state("OB_COMMA");
        rules.push_state("ARRAY");
        rules.push_state("ARR_COMMA");
        rules.push_state("ARR_VALUE");

        rules.push("INITIAL", "[{]", eOpenOb, ">OBJECT:END");
        rules.push("INITIAL", "[[]", eOpenArr, ">ARRAY:END");
        rules.push("OBJECT,OB_COMMA", "[}]", eCloseOb, "<");
        rules.push("OBJECT,NAME", "{STRING}", eName, "COLON");
        rules.push("COLON", ":", rules.skip(), "OB_VALUE");
        rules.push("OB_VALUE", "{STRING}", eString, "OB_COMMA");
        rules.push("OB_VALUE", "{NUMBER}", eNumber, "OB_COMMA");
        rules.push("OB_VALUE", "{BOOL}", eBoolean, "OB_COMMA");
        rules.push("OB_VALUE", "{NULL}", eNull, "OB_COMMA");
        rules.push("OB_VALUE", "[{]", eOpenOb, ">OBJECT:OB_COMMA");
        rules.push("OB_VALUE", "[[]", eOpenArr, ">ARRAY:OB_COMMA");
        rules.push("OB_COMMA", ",", rules.skip(), "NAME");
        rules.push("ARRAY,ARR_COMMA", "\\]", eCloseArr, "<");
        rules.push("ARRAY,ARR_VALUE", "{STRING}", eString, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "{NUMBER}", eNumber, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "{BOOL}", eBoolean, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "{NULL}", eNull, "ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "[{]", eOpenOb, ">OBJECT:ARR_COMMA");
        rules.push("ARRAY,ARR_VALUE", "[[]", eOpenArr, ">ARRAY:ARR_COMMA");
        rules.push("ARR_COMMA", ",", rules.skip(), "ARR_VALUE");
        rules.push("*", "[ \t\r\n]+", rules.skip(), ".");
        lexertl::basic_generator<urules, usm>::build(rules, sm);

        // Read-ahead
        lexertl::lookup(sm, results);

        while (results.id != eEOF)
        {
            std::cout << "Id: " << results.id << " token: " <<
                std::string(utf_out_iter(results.first, results.second),
                    utf_out_iter(results.second, results.second)) <<
                " state = " << results.state << '\n';
            lexertl::lookup(sm, results);
        }

        std::cout << "Stack has " << results.stack.size() <<
            " values on it.\n";
    }
    catch (const std::exception &e)
    {
        std::cout << e.what() << std::endl;
    }

    return 0;
}
Thanks to Dave Handley for all his constructive criticism, performance testing, STL
and general C++ tips.
Thanks to Régis Vaquette for all his testing and enthusiastic encouragement!
Thanks to Hartmut Kaiser for all his performance testing, feature requests and advice
on compatibility with different compilers and, of course, boost.
Thanks to Hari Rangarajan for his bug reports and performance improvement suggestions.
For lexertl I would say just switch on the returned token id. If you want to use a map instead, see https://www.codeproject.com/Articles/1089463/Convert-EBNF-to-BNF-Using-parsertl-lexertl
There's mention of semantic actions but no example of those or explanation of how to create those.
Hi Michael, see licence_1_0.txt under the source directory (It is boost/BSD).
Hi Ben, Please let me know which license lexertl14 uses. Is it GPL? Thanks, Michael
You can easily do this yourself. Maintain a stack of match_results (or iterators if you are using them) together with your input. Each time you encounter an include, simply load the new text, create a new lookup and push it onto your stack, popping when you have exhausted that input. A sketch follows below.
Let me know if you have any problems with this approach.
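Here is a rough sketch of that stack based approach. The include syntax, the load_file() stand-in and the token ids are all invented purely for illustration:

#include <lexertl/generator.hpp>
#include <iostream>
#include <lexertl/lookup.hpp>
#include <memory>
#include <stack>
#include <string>
#include <utility>

// Stand-in for reading the named file from disk.
std::string load_file(const std::string &)
{
    return "nested words";
}

int main()
{
    enum { eEOI, eInclude, eWord, eWS };
    lexertl::rules rules;
    lexertl::state_machine sm;

    rules.push("include\\s+\\S+", eInclude);
    rules.push("[a-z]+", eWord);
    rules.push("\\s+", eWS);
    lexertl::generator::build(rules, sm);

    // Each entry owns its text plus the match_results iterating over it.
    using entry = std::pair<std::shared_ptr<std::string>, lexertl::smatch>;
    std::stack<entry> stack_;
    auto text = std::make_shared<std::string>("top include inner last");

    stack_.emplace(text, lexertl::smatch(text->cbegin(), text->cend()));

    while (!stack_.empty())
    {
        lexertl::smatch &results = stack_.top().second;

        lexertl::lookup(sm, results);

        if (results.id == eEOI)
        {
            // This input is exhausted, so pop back to the includer.
            stack_.pop();
        }
        else if (results.id == eInclude)
        {
            // Load the included text and push a new lookup over it
            // (a real lexer would extract the filename from the token).
            auto inc = std::make_shared<std::string>(load_file(results.str()));

            stack_.emplace(inc, lexertl::smatch(inc->cbegin(), inc->cend()));
        }
        else if (results.id == eWord)
        {
            std::cout << results.str() << '\n';
        }
    }

    return 0;
}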
Hi Ben, one question I have is how to implement include files in my input? Something like yy_switch_to_buffer.
Thanks
Adding operator-- to utf_out_iter is the wrong way to go, because it only outputs a piece of a character, 8 bits at a time. Adding operator-- to utf_in_iter is also the wrong way to go, because we don't want to mess with the match_results.
Therefore, I have added operator+ and operator- to the utf_in_iterators:
if (results.id == eString)
{
    std::cout << std::string(utf_out_iter(results.first + 1, results.second - 1),
        utf_out_iter(results.second - 1, results.second - 1));
}
As far as a PEG engine goes, any examples are welcome! Personally on my list,
for when parsertl
is finished, are IDL, Windows resource file format and SQL.
Hi Ben, one issue I've encountered. Any thoughts?
Using the JSON grammar (for example) the string rule returns the full quoted string. My problem is that your iterator definitions do not seem to support operator-- so it is not possible to easily strip the leading and trailing quote characters. i.e.
"string" is returned as a literal "string" and extraction of string involves an
awkward intermediate std::string
construction.
P.S. One could use this to make a superb runtime PEG engine too. Push off all lexical analysis to the (right) place and concentrate on building PEGs based on token IDs.
A superb piece of work. Thanks.
Thank you. Look out for parsertl
which is coming very soon thanks to
http://web.cs.dal.ca/~sjackson/lalr1.html
shining a wonderfully clear light on the subject of LALR. Also, I've still got my eye on IELR and GLR.
A superb piece of work. Thanks.