Contents
- lexer
- Overview
- Lexer Basics
- Advanced Techniques
- Code Folding
- Using Lexers
- Considerations
- Fields
- CLASS
- COMMENT
- CONSTANT
- DEFAULT
- ERROR
- FOLD_BASE
- FOLD_BLANK
- FOLD_HEADER
- FUNCTION
- IDENTIFIER
- KEYWORD
- LABEL
- NUMBER
- OPERATOR
- PREPROCESSOR
- REGEX
- STRING
- TYPE
- VARIABLE
- WHITESPACE
- alnum
- alpha
- any
- any_char
- ascii
- cntrl
- dec_num
- digit
- extend
- float
- graph
- hex_num
- integer
- lower
- newline
- nonnewline
- nonnewline_esc
- oct_num
- punct
- space
- style_bracebad
- style_bracelight
- style_calltip
- style_class
- style_comment
- style_constant
- style_controlchar
- style_default
- style_definition
- style_embedded
- style_error
- style_function
- style_identifier
- style_indentguide
- style_keyword
- style_label
- style_line_number
- style_nothing
- style_number
- style_operator
- style_preproc
- style_regex
- style_string
- style_tag
- style_type
- style_variable
- style_whitespace
- upper
- word
- xdigit
- Functions
- Tables
lexer
Lexes Scintilla documents with Lua and LPeg.
Overview
Lexers are the mechanism for highlighting the syntax of source code. Scintilla (the editing component behind Textadept and SciTE) traditionally uses static, compiled C++ lexers which are notoriously difficult to create and/or extend. On the other hand, lexers written with Lua make it easy to rapidly create new lexers, extend existing ones, and embed lexers within one another. They tend to be more readable than C++ lexers too.
Lexers are written using Parsing Expression Grammars, or PEGs, with the Lua LPeg library. The following table is taken from the LPeg documentation and summarizes all you need to know about constructing basic LPeg patterns. This module provides convenience functions for creating and working with other more advanced patterns and concepts.
Operator | Description
---|---
lpeg.P(string) | Matches string literally.
lpeg.P(n) | Matches exactly n characters.
lpeg.S(string) | Matches any character in string (Set).
lpeg.R("xy") | Matches any character between x and y (Range).
patt^n | Matches at least n repetitions of patt.
patt^-n | Matches at most n repetitions of patt.
patt1 * patt2 | Matches patt1 followed by patt2.
patt1 + patt2 | Matches patt1 or patt2 (ordered choice).
patt1 - patt2 | Matches patt1 if patt2 does not match.
-patt | Equivalent to ("" - patt).
#patt | Matches patt but consumes no input.
The first part of this document deals with rapidly constructing a simple lexer. The next part deals with more advanced techniques, such as custom coloring and embedding lexers within one another. Following that is a discussion about code folding, or being able to tell Scintilla which code blocks can be “folded” (hidden temporarily from view). After that, instructions on how to use LPeg lexers with the aforementioned Textadept and SciTE editors are given. Finally, considerations on performance and limitations are discussed.
Lexer Basics
All lexers are contained in the lexers/ directory. Your new lexer will also be included in this directory. Before attempting to write one from scratch though, first determine if your programming language is similar to any of the 80+ languages supported. If so, you may be able to copy and modify that lexer, saving some time and effort. The filename of your lexer should be the name of your programming language in lower case followed by a .lua extension. For example, a new Lua lexer would have the name lua.lua.
Note: it is not recommended to use one-character language names like “b”, “c”, or “d”; the lexers for those languages are named b_lang.lua, cpp.lua, and dmd.lua, respectively.
New Lexer Template
There is a lexers/template.txt file that contains a simple template for a new lexer. Feel free to use it, replacing the ‘?’s with the name of your lexer:
-- ? LPeg lexer.
local l = lexer
local token, word_match = l.token, l.word_match
local style, color = l.style, l.color
local P, R, S = lpeg.P, lpeg.R, lpeg.S
local M = {_NAME = '?'}
-- Whitespace.
local ws = token(l.WHITESPACE, l.space^1)
M._rules = {
  {'whitespace', ws},
  {'any_char', l.any_char},
}
M._tokenstyles = {
}
return M
The first 4 lines of code simply define convenience variables you will be using often. The 5th and last lines define and return the lexer object used by Scintilla; they are very important and must be part of every lexer. The sixth line defines what is called a “token”, an essential building block of a lexer. Tokens will be discussed shortly. The rest of the code defines a set of grammar rules and token styles. Those will be discussed later. Note, however, the M. prefix in front of _rules and _tokenstyles: not only do these tables belong to their respective lexers, but any non-local variables should be prefixed by M. so as not to affect Lua’s global environment. All in all, this is a minimal, working lexer that can be built on.
Tokens
Take a moment to think about your programming language’s structure. What kind of key elements does it have? In the template shown earlier, one predefined element all languages have is whitespace. Your language probably also has elements like comments, strings, and keywords. These elements are called “tokens”. They are the so-called “building blocks” of lexers. Source code is broken down into tokens and subsequently colored, resulting in the syntax highlighting you are familiar with. It is up to you how specific you would like your lexer to be when it comes to tokens. Perhaps you would like to only distinguish between keywords and identifiers, or maybe you would like to also recognize constants and built-in functions, methods, or libraries. The Lua lexer, for example, defines 11 tokens: whitespace, comments, strings, numbers, keywords, built-in functions, constants, built-in libraries, identifiers, labels, and operators. Even though constants, built-in functions, and built-in libraries are a subset of identifiers, it is helpful to Lua programmers for the lexer to distinguish between them all. It would have otherwise been perfectly acceptable to just recognize keywords and identifiers.
In a lexer, tokens are composed of a token name and an LPeg pattern that matches a sequence of characters recognized to be an instance of that token. Tokens are created using the token() function. Let us examine the “whitespace” token defined in the template shown earlier:
local ws = token(l.WHITESPACE, l.space^1)
At first glance, the first argument does not appear to be a string name and the second argument does not appear to be an LPeg pattern. Perhaps you were expecting something like:
local ws = token('whitespace', S('\t\v\f\n\r ')^1)
The lexer (l) module actually provides a convenient list of common token names and common LPeg patterns for you to use. Token names include DEFAULT, WHITESPACE, COMMENT, STRING, NUMBER, KEYWORD, IDENTIFIER, OPERATOR, ERROR, PREPROCESSOR, CONSTANT, VARIABLE, FUNCTION, CLASS, TYPE, LABEL, and REGEX. Patterns include any, ascii, extend, alpha, digit, alnum, lower, upper, xdigit, cntrl, graph, print, punct, space, newline, nonnewline, nonnewline_esc, dec_num, hex_num, oct_num, integer, float, and word. You are not limited to the token names and LPeg patterns listed; you can do whatever you like. However, the advantage of using predefined token names is that your lexer’s tokens will inherit the universal syntax highlighting color theme used by your text editor.
Example Tokens
So, how might other tokens like comments, strings, and keywords be defined? Here are some examples.
Comments
Line-style comments beginning with a prefix character (or characters) are easy to express with LPeg:
local shell_comment = token(l.COMMENT, '#' * l.nonnewline^0)
local c_line_comment = token(l.COMMENT, '//' * l.nonnewline_esc^0)
The comments above start with a ‘#’ or “//” and go to the end of the line. The second comment recognizes the next line also as a comment if the current line ends with a ‘\’ escape character.
C-style “block” comments with a start and end delimiter are also easy to express:
local c_comment = token(l.COMMENT, '/*' * (l.any - '*/')^0 * P('*/')^-1)
This comment starts with a “/*” sequence and can contain anything up to, and including, an ending “*/” sequence. The ending “*/” is defined to be optional so that an unfinished comment is still matched as a comment and highlighted as you would expect.
Strings
It may be tempting to think that a string is not much different from the block comment shown above in that both have start and end delimiters:
local dq_str = '"' * (l.any - '"')^0 * P('"')^-1
local sq_str = "'" * (l.any - "'")^0 * P("'")^-1
local simple_string = token(l.STRING, dq_str + sq_str)
However, most programming languages allow escape sequences in strings such that a sequence like “\"” in a double-quoted string indicates that the ‘"’ is not the end of the string. The above token would incorrectly match such a string. Instead, a convenient function is provided for you: delimited_range().
local dq_str = l.delimited_range('"', '\\', true)
local sq_str = l.delimited_range("'", '\\', true)
local string = token(l.STRING, dq_str + sq_str)
In this case, ‘\’ is treated as an escape character in a string sequence. The true argument is analogous to P('"')^-1 in that non-terminated strings are highlighted as expected.
Keywords
Instead of matching n keywords with n P('keyword_n') ordered choices, another convenience function, word_match(), is provided. It is much easier and more efficient to write word matches like:
local keyword = token(l.KEYWORD, l.word_match{
  'keyword_1', 'keyword_2', ..., 'keyword_n'
})

local case_insensitive_keyword = token(l.KEYWORD, l.word_match({
  'KEYWORD_1', 'keyword_2', ..., 'KEYword_n'
}, nil, true))

local hyphened_keyword = token(l.KEYWORD, l.word_match({
  'keyword-1', 'keyword-2', ..., 'keyword-n'
}, '-'))
By default, characters considered to be in keywords are in the set of alphanumeric characters and underscores. The last token demonstrates how to allow ‘-’ (hyphen) characters to be in keywords as well.
Numbers
Most programming languages have the same format for integer and float tokens, so it might be as simple as using a couple of predefined LPeg patterns:
local number = token(l.NUMBER, l.float + l.integer)
However, some languages allow postfix characters on integers.
local integer = P('-')^-1 * (l.dec_num * S('lL')^-1)
local number = token(l.NUMBER, l.float + l.hex_num + integer)
Your language may have other tweaks that may be necessary, but it is up to you how fine-grained you want your highlighting to be. After all, it is not like you are writing a compiler or interpreter!
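As a hypothetical tweak of that kind, a language that allows an ‘f’ or ‘F’ suffix on floating point literals could extend the predefined patterns in the same way (a sketch only, not taken from any shipped lexer):
-- Allow an optional 'f' or 'F' suffix on floats in this hypothetical language.
local float = l.float * S('fF')^-1
local number = token(l.NUMBER, float + l.hex_num + l.integer)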
Rules
Programming languages have grammars, which specify how their tokens may be used structurally. For example, comments usually cannot appear within a string. Grammars are broken down into rules, which are simply combinations of tokens. Recall from the lexer template the _rules table, which defines all the rules used by the lexer grammar:
M._rules = {
  {'whitespace', ws},
  {'any_char', l.any_char},
}
Each entry in a lexer’s _rules table is composed of a rule name and its associated pattern. Rule names are completely arbitrary and serve only to identify and distinguish between different rules. Rule order is important: if text does not match the first rule, the second rule is tried, and so on. This simple grammar says to match whitespace tokens under a rule named “whitespace” and anything else under a rule named “any_char”.
To illustrate why rule order is important, here is an example of a simplified Lua grammar:
M._rules = {
  {'whitespace', ws},
  {'keyword', keyword},
  {'identifier', identifier},
  {'string', string},
  {'comment', comment},
  {'number', number},
  {'label', label},
  {'operator', operator},
  {'any_char', l.any_char},
}
Note how identifiers come after keywords. In Lua, as with most programming languages, the characters allowed in keywords and identifiers are in the same set (alphanumerics plus underscores). If the “identifier” rule was listed before the “keyword” rule, all keywords would match identifiers and thus would be incorrectly highlighted as identifiers instead of keywords. The same idea applies to function, constant, etc. tokens that you may want to distinguish between: their rules should come before identifiers.
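To make the ordering concrete, here is a small sketch of the token definitions behind the “keyword” and “identifier” rules above (only a few of Lua’s keywords are shown). Both tokens match the same character set, so the “keyword” rule must be listed first:
-- Keywords must be tried before identifiers, or they would match as identifiers.
local keyword = token(l.KEYWORD, l.word_match{
  'and', 'break', 'do', 'end', 'function', 'if'
})
local identifier = token(l.IDENTIFIER, l.word)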
Now, you may be wondering what l.any_char is and why the “any_char” rule exists. l.any_char is a special, predefined token that matches a single character as a DEFAULT token. The “any_char” rule should appear in every lexer because there may be some text that does not match any of the rules you defined. How is that possible? Well, in Lua, for example, the ‘!’ character is meaningless outside a string or comment. Therefore, if the lexer encounters a ‘!’ in such a circumstance, it would not match any existing rules other than “any_char”. With “any_char”, the lexer can “skip” over the “error” and continue highlighting the rest of the source file correctly. Without “any_char”, the lexer would fail to continue. Perhaps you instead want your language to highlight such “syntax errors”. You would replace the “any_char” rule such that the grammar looks like:
M._rules = {
  {'whitespace', ws},
  {'error', token(l.ERROR, l.any)},
}
This would identify and highlight any character not matched by an existing rule as an ERROR token.
Even though the rules defined in the examples above contain a single token, rules can consist of multiple tokens. For example, a rule for an HTML tag could be composed of a tag token followed by an arbitrary number of attribute tokens, allowing all tokens to be highlighted separately. The rule might look something like this:
{'tag', tag_start * (ws * attributes)^0 * tag_end^-1}
Summary
Lexers are primarily composed of tokens and grammar rules. A number of convenience patterns and functions are available for rapidly creating a lexer. If you choose to use predefined token names for your tokens, you do not have to define how tokens are highlighted. They will inherit the default syntax highlighting color theme your editor uses.
Advanced Techniques
Styles and Styling
The most basic form of syntax highlighting is assigning different colors to different tokens. Instead of highlighting with just colors, Scintilla allows for more rich highlighting, or “styling”, with different fonts, font sizes, font attributes, and foreground and background colors, just to name a few. The unit of this rich highlighting is called a “style”. Styles are created using the style() function. By default, predefined token names like WHITESPACE, COMMENT, STRING, etc. are associated with a particular style as part of a universal color theme. These predefined styles include style_nothing, style_class, style_comment, style_constant, style_definition, style_error, style_function, style_keyword, style_label, style_number, style_operator, style_regex, style_string, style_preproc, style_tag, style_type, style_variable, style_whitespace, style_embedded, and style_identifier. Like with predefined token names and LPeg patterns, you are not limited to these predefined styles. At their core, styles are just Lua tables, so you can create new ones and/or modify existing ones. Each style consists of a set of attributes:
Attribute | Description
---|---
font | The name of the font the style uses.
size | The size of the font the style uses.
bold | Whether or not the font face is bold.
italic | Whether or not the font face is italic.
underline | Whether or not the font face is underlined.
fore | The foreground color of the font face.
back | The background color of the font face.
eolfilled | Whether or not the background color extends to the end of the line.
case | The case of the font (1 = upper, 2 = lower, 0 = normal).
visible | Whether or not the text is visible.
changeable | Whether the text is changeable or read-only.
hotspot | Whether or not the text is clickable.
Font colors are defined using the color() function. Like with token names, LPeg patterns, and styles, there is a set of predefined colors in the l.colors table, but the color names depend on the current theme being used. It is generally not a good idea to manually define colors within styles in your lexer because they might not fit into a user’s chosen color theme. It is not even recommended to use a predefined color in a style because that color may be theme-specific. Instead, the best practice is to either use predefined styles or derive new color-agnostic styles from predefined ones. For example, Lua “longstring” tokens use the existing style_string style instead of defining a new one.
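For instance, a custom “longstring” token can simply reuse that predefined style (a simplified sketch; the actual Lua lexer’s long string pattern also handles level markers such as [==[):
-- Reuse the predefined string style for a custom token name.
local longstring = token('longstring', l.nested_pair('[[', ']]', true))

M._tokenstyles = {
  {'longstring', l.style_string},
}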
Example Styles
Defining styles is pretty straightforward. An empty style that inherits the default theme settings is defined like this:
local style_nothing = l.style{}
A similar style but with a bold font face is defined like this:
local style_bold = l.style{bold = true}
If you wanted the same style, but also with an italic font face, you can define the new style in terms of the old one:
local style_bold_italic = style_bold..{italic = true}
This allows you to derive new styles from predefined ones without having to rewrite them. This operation leaves the old style unchanged. Thus, if you had a “static variable” token whose style you wanted to base off of style_variable, it would probably look like:
local style_static_var = l.style_variable..{italic = true}
More examples of style definitions are in the color theme files in the lexers/themes/ folder.
Token Styles
Tokens are assigned to a particular style with the lexer’s _tokenstyles table. Recall the token definition and _tokenstyles table from the lexer template:
local ws = token(l.WHITESPACE, l.space^1)
...
M._tokenstyles = {
}
Why is a style not assigned to the WHITESPACE token? As mentioned earlier, tokens that use predefined token names are automatically associated with a particular style. Only tokens with custom token names need manual style associations. As an example, consider a custom whitespace token:
local ws = token('custom_whitespace', l.space^1)
Assigning a style to this token looks like:
M._tokenstyles = {
  {'custom_whitespace', l.style_whitespace}
}
Each entry in a lexer’s _tokenstyles table is composed of a token’s name and its associated style. Unlike with _rules, the ordering in _tokenstyles does not matter since entries are just associations. Do not confuse token names with rule names. They are completely different entities. In the example above, the “custom_whitespace” token is just being assigned the existing style for WHITESPACE tokens. If instead you wanted to color the background of whitespace a shade of grey, it might look like:
local style = l.style_whitespace..{back = l.colors.grey}
M._tokenstyles = {
  {'custom_whitespace', style}
}
Remember it is generally not recommended to assign specific colors in styles, but in this case, the color grey likely exists in all user color themes.
Line Lexers
By default, lexers match the arbitrary chunks of text passed to them by Scintilla. These chunks may be a full document, only the visible part of a document, or even just portions of lines. Some lexers need to match whole lines. For example, a lexer for the output of a file “diff” needs to know if the line started with a ‘+’ or ‘-’ and then style the entire line accordingly. To indicate that your lexer matches by line, use the _LEXBYLINE field:
M._LEXBYLINE = true
Now the input text for the lexer is a single line at a time. Keep in mind that line lexers do not have the ability to look ahead at subsequent lines.
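Here is a minimal, hypothetical sketch of a line lexer for a diff-like format (not the diff lexer that ships with this module). Because each chunk of input is a single line, a leading ‘+’ or ‘-’ can be matched directly, and the custom tokens reuse predefined, color-agnostic styles:
-- Match text one line at a time.
M._LEXBYLINE = true

local added = token('addition', '+' * l.nonnewline^0)
local removed = token('deletion', '-' * l.nonnewline^0)

M._rules = {
  {'added', added},
  {'removed', removed},
  {'any_char', l.any_char},
}

M._tokenstyles = {
  {'addition', l.style_string}, -- reuse predefined styles so user themes apply
  {'deletion', l.style_error},
}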
Embedded Lexers
Lexers can be embedded within one another very easily, requiring minimal effort. In the following sections, the lexer being embedded is called the “child” lexer and the lexer a child is being embedded in is called the “parent”. For example, consider an HTML lexer and a CSS lexer. Either lexer can stand alone for styling their respective HTML and CSS files. However, CSS can be embedded inside HTML. In this specific case, the CSS lexer is referred to as the “child” lexer with the HTML lexer being the “parent”. Now consider an HTML lexer and a PHP lexer. This sounds a lot like the case with CSS, but there is a subtle difference: PHP embeds itself into HTML while CSS is embedded in HTML. This fundamental difference results in two types of embedded lexers: a parent lexer that embeds other child lexers in it (like HTML embedding CSS), and a child lexer that embeds itself within a parent lexer (like PHP embedding itself in HTML).
Parent Lexer
Before you can embed a child lexer into a parent lexer, the child lexer needs to be loaded inside the parent. This is done with the load() function. For example, loading the CSS lexer within the HTML lexer looks like:
local css = l.load('css')
The next part of the embedding process is telling the parent lexer when to switch over to the child lexer and when to switch back. These indications are called the “start rule” and “end rule”, respectively, and are just LPeg patterns. Continuing with the HTML/CSS example, the transition from HTML to CSS is when a “style” tag with a “type” attribute whose value is “text/css” is encountered:
local css_tag = P('<style') * P(function(input, index)
  if input:find('^[^>]+type="text/css"', index) then
    return index
  end
end)
This pattern looks for the beginning of a “style” tag and searches its attribute list for the text “type="text/css"”. (In this simplified example, the Lua pattern does not consider whitespace around the ‘=’, nor does it consider that single quotes can be used instead of double quotes.) If there is a match, the functional pattern returns a value instead of nil. In this case, the value returned does not matter because we ultimately want the “style” tag to be styled as an HTML tag, so the actual start rule looks like this:
local css_start_rule = #css_tag * tag
Now that the parent knows when to switch to the child, it needs to know when to switch back. In the case of HTML/CSS, the switch back occurs when an ending “style” tag is encountered, but the tag should still be styled as an HTML tag:
local css_end_rule = #P('</style>') * tag
Once the child lexer is loaded and its start and end rules are defined, you can embed it in the parent using the embed_lexer() function:
l.embed_lexer(M, css, css_start_rule, css_end_rule)
The first parameter is the parent lexer object to embed the child in, which in this case is M. The other three parameters are the child lexer object loaded earlier followed by its start and end rules.
Child Lexer
The process for instructing a child lexer to embed itself into a parent is very similar to embedding a child into a parent: first, load the parent lexer into the child lexer with the load() function and then create start and end rules for the child lexer. However, in this case, swap the lexer object arguments to embed_lexer() and indicate through a _lexer field in the child lexer that the parent should be used as the primary lexer. For example, in the PHP lexer:
local html = l.load('hypertext')
local php_start_rule = token('php_tag', '<?php ')
local php_end_rule = token('php_tag', '?>')
l.embed_lexer(html, M, php_start_rule, php_end_rule)
M._lexer = html
The last line is very important. Without it, the PHP lexer’s rules would be used instead of the HTML lexer’s rules.
Code Folding
When reading source code, it is occasionally helpful to temporarily hide blocks of code like functions, classes, comments, etc. This concept is called “folding”. In the Textadept and SciTE editors for example, little indicators in the editor margins appear next to code that can be folded at places called “fold points”. When an indicator is clicked, the code associated with it is visually hidden until the indicator is clicked again. A lexer can specify these fold points and what code exactly to fold.
The fold points for most languages occur on keywords or character sequences. Examples of fold keywords are “if” and “end” in Lua, and examples of fold character sequences are ‘{’, ‘}’, “/*”, and “*/” in C for code block and comment delimiters, respectively. However, these fold points cannot occur just anywhere. For example, fold keywords that appear within strings or comments should not be recognized as fold points. Your lexer can conveniently define fold points with such granularity in a _foldsymbols table. For example, consider C:
M._foldsymbols = {
  [l.OPERATOR] = {['{'] = 1, ['}'] = -1},
  [l.COMMENT] = {['/*'] = 1, ['*/'] = -1},
  _patterns = {'[{}]', '/%*', '%*/'}
}
The first assignment states that any ‘{’ or ‘}’ that the lexer recognizes as an OPERATOR token is a fold point. The integer 1 indicates the match is a beginning fold point and -1 indicates the match is an ending fold point. Likewise, the second assignment states that any “/*” or “*/” that the lexer recognizes as part of a COMMENT token is a fold point. Any occurrences of these characters outside their defined tokens (such as in a string) would not be considered a fold point. Finally, every _foldsymbols table must have a _patterns field that contains a list of Lua patterns that match fold points. If the lexer encounters text that matches one of those patterns, the matched text is looked up in its token’s table to determine whether or not it is a fold point. In the example above, the first Lua pattern matches any ‘{’ or ‘}’ characters. When the lexer comes across one of those characters, it checks if the match is an OPERATOR token. If so, the match is identified as a fold point. It is the same idea for the other patterns. (The ‘%’ is in the other patterns because ‘*’ is a special character in Lua patterns and must be escaped.) How are fold keywords specified? Here is an example for Lua:
M._foldsymbols = {
  [l.KEYWORD] = {
    ['if'] = 1, ['do'] = 1, ['function'] = 1,
    ['end'] = -1, ['repeat'] = 1, ['until'] = -1
  },
  _patterns = {'%l+'},
}
Any time the lexer encounters a lower case word, if that word is a KEYWORD token and in the associated list of fold points, it is identified as a fold point.
If your lexer needs to do some additional processing to determine if a fold point has occurred on a match, you can assign a function that returns an integer. Returning 1 or -1 indicates the match is a fold point. Returning 0 indicates it is not. For example:
local function fold_strange_token(text, pos, line, s, match)
  if ... then
    return 1 -- beginning fold point
  elseif ... then
    return -1 -- ending fold point
  end
  return 0
end

M._foldsymbols = {
  ['strange_token'] = {['|'] = fold_strange_token},
  _patterns = {'|'}
}
Any time the lexer encounters a ‘|’ that is a “strange_token”, it calls the fold_strange_token function to determine if ‘|’ is a fold point. These kinds of functions are called with the following arguments: the text to fold, the position of the start of the current line in the text to fold, the text of the current line, the position in the current line the matched text starts at, and the matched text itself.
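As a concrete, purely hypothetical example of such a function, suppose a ‘|’ at the start of a line begins a fold and a ‘|’ at the end of a line ends one; the current line text alone is enough to decide:
local function fold_bar(text, pos, line, s, match)
  if line:find('^%s*|') then
    return 1 -- a '|' starting a line begins a fold point
  elseif line:find('|%s*$') then
    return -1 -- a '|' ending a line ends a fold point
  end
  return 0 -- any other '|' is not a fold point
end

M._foldsymbols = {
  ['strange_token'] = {['|'] = fold_bar},
  _patterns = {'|'}
}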
Using Lexers
Textadept
Put your lexer in your ~/.textadept/lexers/ directory so it will not be overwritten when upgrading Textadept. Also, lexers in this directory override default lexers. Thus, a user lua lexer would be loaded instead of the default lua lexer. This is convenient if you wish to tweak a default lexer to your liking. Then add a mime-type for your lexer if necessary.
SciTE
Create a .properties file for your lexer and import it in either your SciTEUser.properties or SciTEGlobal.properties. The contents of the .properties file should contain:
file.patterns.[lexer_name]=[file_patterns]
lexer.$(file.patterns.[lexer_name])=[lexer_name]
where [lexer_name] is the name of your lexer (minus the .lua extension) and [file_patterns] is a set of file extensions matched to your lexer.
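For example, registering a hypothetical lexer file named rexx.lua for two made-up file extensions would look like:
file.patterns.rexx=*.rexx;*.orx
lexer.$(file.patterns.rexx)=rexx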
Please note any styling information in .properties files is ignored. Styling information for Lua lexers is contained in your theme file in the lexers/themes/ directory.
Considerations
Performance
There might be some slight overhead when initializing a lexer, but loading a file from disk into Scintilla is usually more expensive. On modern computer systems, I see no difference in speed between LPeg lexers and Scintilla’s C++ ones. Lexers can usually be optimized for speed by re-arranging rules in the _rules table so that the most common rules are matched first. Do keep in mind the fact that order matters for similar rules.
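For example, if comments turn out to dominate the files your lexer sees, the “comment” rule from the simplified Lua grammar shown earlier could be moved up, as long as “keyword” still precedes “identifier” (a sketch only; measure before reordering):
M._rules = {
  {'whitespace', ws},
  {'comment', comment},
  {'keyword', keyword},
  {'identifier', identifier},
  {'string', string},
  {'number', number},
  {'label', label},
  {'operator', operator},
  {'any_char', l.any_char},
}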
Limitations
Embedded preprocessor languages like PHP are not completely embedded in their parent languages in that the parent’s tokens do not support start and end rules. This mostly goes unnoticed, but code like
<div id="<?php echo $id; ?>">
or
<div <?php if ($odd) { echo 'class="odd"'; } ?>>
will not style correctly.
Troubleshooting
Errors in lexers can be tricky to debug. Lua errors are printed to io.stderr and _G.print() statements in lexers are printed to io.stdout. Running your editor from a terminal is the easiest way to see errors as they occur.
Risks
Poorly written lexers have the ability to crash Scintilla (and thus its containing application), so unsaved data might be lost. However, these crashes have only been observed in early lexer development, when syntax errors or pattern errors are present. Once the lexer actually starts styling text (either correctly or incorrectly, it does not matter), no crashes have been observed.
Acknowledgements
Thanks to Peter Odding for his lexer post on the Lua mailing list that inspired me, and thanks to Roberto Ierusalimschy for LPeg.
Fields
CLASS (string)
The token name for class tokens.
COMMENT (string)
The token name for comment tokens.
CONSTANT (string)
The token name for constant tokens.
DEFAULT (string)
The token name for default tokens.
ERROR (string)
The token name for error tokens.
FOLD_BASE (number)
The initial (root) fold level.
FOLD_BLANK (number)
Flag indicating that the line is blank.
FOLD_HEADER (number)
Flag indicating that the line is a fold point.
FUNCTION (string)
The token name for function tokens.
IDENTIFIER (string)
The token name for identifier tokens.
KEYWORD (string)
The token name for keyword tokens.
LABEL (string)
The token name for label tokens.
NUMBER (string)
The token name for number tokens.
OPERATOR (string)
The token name for operator tokens.
PREPROCESSOR (string)
The token name for preprocessor tokens.
REGEX (string)
The token name for regex tokens.
STRING (string)
The token name for string tokens.
TYPE (string)
The token name for type tokens.
VARIABLE (string)
The token name for variable tokens.
WHITESPACE (string)
The token name for whitespace tokens.
alnum (pattern)
A pattern matching any alphanumeric character (A-Z, a-z, 0-9).
alpha (pattern)
A pattern matching any alphabetic character (A-Z, a-z).
any (pattern)
A pattern matching any single character.
any_char (pattern)
A DEFAULT token matching any single character, useful in a fallback rule for a grammar.
ascii (pattern)
A pattern matching any ASCII character (0..127).
cntrl (pattern)
A pattern matching any control character (0..31).
dec_num (pattern)
A pattern matching a decimal number.
digit (pattern)
A pattern matching any digit (0-9).
extend (pattern)
A pattern matching any ASCII extended character (0..255).
float (pattern)
A pattern matching a floating point number.
graph (pattern)
A pattern matching any graphical character (! to ~).
hex_num (pattern)
A pattern matching a hexadecimal number.
integer (pattern)
A pattern matching a decimal, hexadecimal, or octal number.
lower (pattern)
A pattern matching any lower case character (a-z).
newline (pattern)
A pattern matching any newline characters.
nonnewline (pattern)
A pattern matching any non-newline character.
nonnewline_esc (pattern)
A pattern matching any non-newline character excluding newlines escaped with ‘\’.
oct_num (pattern)
A pattern matching an octal number.
print (pattern)
A pattern matching any printable character (space to ~).
punct (pattern)
A pattern matching any punctuation character not alphanumeric (! to /, : to @, [ to `, { to ~).
space (pattern)
A pattern matching any whitespace character (\t, \v, \f, \n, \r, space).
style_bracebad (table)
The style used for unmatched brace characters.
style_bracelight (table)
The style used for highlighted brace characters.
style_calltip (table)
The style used by call tips if buffer.call_tip_use_style is set. Only the font name, size, and color attributes are used.
style_class (table)
The style typically used for class definitions.
style_comment (table)
The style typically used for code comments.
style_constant (table)
The style typically used for constants.
style_controlchar (table)
The style used for control characters. Color attributes are ignored.
style_default (table)
The style all styles are based off of.
style_definition (table)
The style typically used for definitions.
style_embedded (table)
The style typically used for embedded code.
style_error (table)
The style typically used for erroneous syntax.
style_function (table)
The style typically used for function definitions.
style_identifier (table)
The style typically used for identifier words.
style_indentguide (table)
The style used for indentation guides.
style_keyword (table)
The style typically used for language keywords.
style_label (table)
The style typically used for labels.
style_line_number (table)
The style used for all margins except fold margins.
style_nothing (table)
The style typically used for no styling.
style_number (table)
The style typically used for numbers.
style_operator (table)
The style typically used for operators.
style_preproc (table)
The style typically used for preprocessor statements.
style_regex (table)
The style typically used for regular expression strings.
style_string (table)
The style typically used for strings.
style_tag (table)
The style typically used for markup tags.
style_type (table)
The style typically used for static types.
style_variable (table)
The style typically used for variables.
style_whitespace (table)
The style typically used for whitespace.
upper (pattern)
A pattern matching any upper case character (A-Z).
word (pattern)
A pattern matching a typical word starting with a letter or underscore and then any alphanumeric or underscore characters.
xdigit (pattern)
A pattern matching any hexadecimal digit (0-9, A-F, a-f).
Functions
color (r, g, b)
Creates and returns a Scintilla color from r, g, and b string hexadecimal color components.
Parameters:
- r: The string red hexadecimal component of the color.
- g: The string green hexadecimal component of the color.
- b: The string blue hexadecimal component of the color.
Usage:
local red = color('FF', '00', '00')
Return:
- integer color for Scintilla.
delimited_range (chars, escape, end_optional, balanced, forbidden)
Creates and returns a pattern that matches a range of text bounded by chars characters. This is a convenience function for matching more complicated delimited ranges like strings with escape characters and balanced parentheses. escape specifies the escape characters a range can have, end_optional indicates whether or not unterminated ranges match, balanced indicates whether or not to handle balanced ranges like parentheses and requires chars to be composed of two characters, and forbidden is a set of characters disallowed in ranges such as newlines.
Parameters:
- chars: The character(s) that bound the matched range.
- escape: Optional escape character. This parameter may be nil or the empty string to indicate no escape character.
- end_optional: Optional flag indicating whether or not an ending delimiter is optional. If true, the range begun by the start delimiter matches until an end delimiter or the end of the input is reached.
- balanced: Optional flag indicating whether or not a balanced range is matched, like the “%b” Lua pattern. This flag only applies if chars consists of two different characters (e.g. “()”).
- forbidden: Optional string of characters forbidden in a delimited range. Each character is part of the set. This is particularly useful for disallowing newlines in delimited ranges.
Usage:
local dq_str_noescapes = l.delimited_range('"', nil, true)
local dq_str_escapes = l.delimited_range('"', '\\', true)
local unbalanced_parens = l.delimited_range('()', '\\')
local balanced_parens = l.delimited_range('()', '\\', false, true)
Return:
- pattern
embed_lexer (parent, child, start_rule, end_rule)
Embeds child lexer in parent with start_rule and end_rule, patterns that signal the beginning and end of the embedded lexer, respectively.
Parameters:
- parent: The parent lexer.
- child: The child lexer.
- start_rule: The pattern that signals the beginning of the embedded lexer.
- end_rule: The pattern that signals the end of the embedded lexer.
Usage:
l.embed_lexer(M, css, css_start_rule, css_end_rule)
l.embed_lexer(html, M, php_start_rule, php_end_rule)
l.embed_lexer(html, ruby, ruby_start_rule, ruby_end_rule)
fold (text, start_pos, start_line, start_level)
Folds text, a chunk of text starting at position start_pos on line number start_line with a beginning fold level of start_level in the buffer. Called by the Scintilla lexer; do not call from Lua. If the current lexer has a _fold function or a _foldsymbols table, it is used to perform folding. Otherwise, if a fold.by.indentation property is set, folding by indentation is done.
Parameters:
- text: The text in the buffer to fold.
- start_pos: The position in the buffer text starts at.
- start_line: The line number text starts on.
- start_level: The fold level text starts on.
Return:
- table of fold levels.
fold_line_comments (prefix)
Returns a fold function, to be used within the lexer’s _foldsymbols table, that folds consecutive line comments beginning with string prefix.
Parameters:
- prefix: The prefix string defining a line comment.
Usage:
[l.COMMENT] = {['--'] = l.fold_line_comments('--')}
[l.COMMENT] = {['//'] = l.fold_line_comments('//')}
get_fold_level (line_number)
Returns the fold level for line number line_number. This level already has SC_FOLDLEVELBASE added to it, so you do not need to add it yourself.
Parameters:
- line_number: The line number to get the fold level of.
Return:
- integer fold level
get_indent_amount (line_number)
Returns the amount of indentation the text on line number line_number has.
Parameters:
- line_number: The line number to get the indent amount of.
Return:
- integer indent amount
get_property (key, default)
Returns the integer property value associated with string property key, or default.
Parameters:
- key: The string property key.
- default: Optional integer value to return if key is not set.
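A hypothetical usage, checking whether folding by indentation was requested (the fold.by.indentation property mentioned under fold() above):
local fold_by_indentation = l.get_property('fold.by.indentation', 0)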
Return:
- integer property value
get_style_at (pos)
Returns the string style name and style number at position pos in the buffer.
Parameters:
- pos: The position in the buffer to get the style for.
Return:
- style name
- style number
last_char_includes (s)
Creates and returns a pattern that matches any previous non-whitespace character in s and consumes no input.
Parameters:
- s: String character set like one passed to lpeg.S().
Usage:
local regex = l.last_char_includes('+-*!%^&|=,([{') * l.delimited_range('/', '\\')
Return:
- pattern
lex (text, init_style)
Lexes a chunk of text text with an initial style number of init_style. Called by the Scintilla lexer; do not call from Lua. If the lexer has a _LEXBYLINE flag set, the text is lexed one line at a time. Otherwise the text is lexed as a whole.
Parameters:
- text: The text in the buffer to lex.
- init_style: The current style. Multiple-language lexers use this to determine which language to start lexing in.
Return:
- table of token names and positions.
load (lexer_name)
Initializes or loads lexer lexer_name and returns the lexer object. Scintilla calls this function to load a lexer. Parent lexers also call this function to load child lexers and vice-versa.
Parameters:
- lexer_name: The name of the lexing language.
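Usage (repeating the embedding examples from earlier in this document):
local css = l.load('css')
local html = l.load('hypertext')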
Return:
- lexer object
nested_pair (start_chars, end_chars, end_optional)
Similar to delimited_range(), but allows for multi-character, nested delimiters start_chars and end_chars. end_optional indicates whether or not unterminated ranges match. With single-character delimiters, this function is identical to delimited_range(start_chars..end_chars, nil, end_optional, true).
Parameters:
- start_chars: The string starting a nested sequence.
- end_chars: The string ending a nested sequence.
- end_optional: Optional flag indicating whether or not an ending delimiter is optional. If true, the range begun by the start delimiter matches until an end delimiter or the end of the input is reached.
Usage:
local nested_comment = l.nested_pair('/*', '*/', true)
Return:
- pattern
starts_line (patt)
Creates and returns a pattern that matches pattern patt only at the beginning of a line.
Parameters:
- patt: The LPeg pattern to match at the beginning of a line.
Usage:
local preproc = token(l.PREPROCESSOR, #P('#') * l.starts_line('#' * l.nonnewline^0))
Return:
- pattern
style (style_table)
Creates and returns a Scintilla style from the given table of style properties.
Parameters:
- style_table: A table of style properties:
  - font (string): The name of the font the style uses.
  - size (number): The size of the font the style uses.
  - bold (bool): Whether or not the font face is bold.
  - italic (bool): Whether or not the font face is italic.
  - underline (bool): Whether or not the font face is underlined.
  - fore (number): The foreground color of the font face.
  - back (number): The background color of the font face.
  - eolfilled (bool): Whether or not the background color extends to the end of the line.
  - case (number): The case of the font (1 = upper, 2 = lower, 0 = normal).
  - visible (bool): Whether or not the text is visible.
  - changeable (bool): Whether the text is changeable or read-only.
  - hotspot (bool): Whether or not the text is clickable.
Usage:
local style_bold_italic = style{bold = true, italic = true}
local style_grey = style{fore = l.colors.grey}
Return:
- style table
token (name, patt)
Creates and returns a token pattern with the name name and pattern patt. If name is not a predefined token name, its style must be defined in the lexer’s _tokenstyles table.
Parameters:
- name: The name of the token. If this name is not a predefined token name, then a style needs to be associated with it in the lexer’s _tokenstyles table.
- patt: The LPeg pattern associated with the token.
Usage:
local ws = token(l.WHITESPACE, l.space^1)
local annotation = token('annotation', '@' * l.word)
Return:
- pattern
word_match (words, word_chars, case_insensitive)
Creates and returns a pattern that matches any word in the set words case-sensitively, unless case_insensitive is true, with the set of word characters being alphanumerics, underscores, and all of the characters in word_chars. This is a convenience function for simplifying a set of ordered choice word patterns.
Parameters:
- words: A table of words.
- word_chars: Optional string of additional characters considered to be part of a word. By default, word characters are alphanumerics and underscores (“%w_” in Lua). This parameter may be nil or the empty string to indicate no additional word characters.
- case_insensitive: Optional boolean flag indicating whether or not the word match is case-insensitive. The default is false.
Usage:
local keyword = token(l.KEYWORD, word_match{'foo', 'bar', 'baz'})
local keyword = token(l.KEYWORD, word_match({'foo-bar', 'foo-baz', 'bar-foo', 'bar-baz', 'baz-foo', 'baz-bar'}, '-', true))
Return:
- pattern
Tables
colors
Table of common colors for a theme. This table should be redefined in each theme.
lexer
Individual lexer fields.
Fields:
- _NAME: The string name of the lexer in lowercase.
- _rules: An ordered list of rules for a lexer grammar. Each rule is a table containing an arbitrary rule name and the LPeg pattern associated with the rule. The order of rules is important, as rules are matched sequentially. Ensure there is a fallback rule in case the lexer encounters any unexpected input, usually using the predefined l.any_char token. Child lexers should not use this table to access and/or modify their parent’s rules and vice-versa. Use the _RULES table instead.
- _tokenstyles: A list of styles associated with non-predefined token names. Each token style is a table containing the name of the token (not a rule containing the token) and the style associated with the token. The order of token styles is not important. It is recommended to use predefined styles or color-agnostic styles derived from predefined styles to ensure compatibility with user color themes.
- _foldsymbols: A table of recognized fold points for the lexer. Keys are token names with table values defining fold points. Those table values have string keys of keywords or characters that indicate a fold point whose values are integers. A value of 1 indicates a beginning fold point and a value of -1 indicates an ending fold point. Values can also be functions that return 1, -1, or 0 (indicating no fold point) for keys which need additional processing. There is also a required _patterns key whose value is a table containing Lua pattern strings that match all fold points (the string keys contained in token name table values). When the lexer encounters text that matches one of those patterns, the matched text is looked up in its token’s table to determine whether or not it is a fold point.
- _fold: If this function exists in the lexer, it is called for folding the document instead of using _foldsymbols or indentation.
- _lexer: For child lexers embedding themselves into a parent lexer, this field should be set to the parent lexer object in order for the parent’s rules to be used instead of the child’s.
- _RULES: A map of rule name keys with their associated LPeg pattern values for the lexer. This is constructed from the lexer’s _rules table and is accessible to other lexers for embedded lexer applications like modifying parent or child rules.
- _LEXBYLINE: Indicates the lexer matches text by whole lines instead of arbitrary chunks. The default value is false. Line lexers cannot look ahead to subsequent lines.