Spirit X3 – Separate Lexer Part I

Back in this post, I said this about Spirit:

…it would be very feasible to write a lexical analyzer that makes a token object stream available via a ForwardIterator and write your grammar rules based on that.

But is it? Really?

The short answer is – yes, it's feasible, but probably not a good idea.

The long answer is the journey we'll take over the next two posts.

Overall Design

The first part will be a stand-alone lexer (named – of course – Lexer), that will take a pair of iterators to a character stream and turn it into a stream of Tokens. The token stream will be available through an iterator interface. We'll look at it in more detail in a moment.

The Spirit framework can be thought of as having 4 categories of classes/objects:

  • rules
  • combinators (“|”, “>>”, etc)
  • directives (lexeme and friends)
  • primitive parsers (char_, etc)

Only the primitive parsers truly care about the type you get when dereferencing the iterators. Unfortunately, they really care (and rightly so). So, that means we will need to write replacements for them. Fortunately, we do not have to replace any of the rule or combinator infrastructure, or this would be undoable – even on a dare.

To recap – we will be writing the following classes:

  • Lexer – the tokenizer
  • Token – the class that represents the lexed tokens.
  • tok – a primitive Token parser.

We will look at Token and Lexer in this post and tok in the next.

All code can be found in the GitHub repository.

Token Class

Looking at lexer.hpp, the first thing we see is the enum TokenType. No surprises here except possibly the fact that we need a special tokEOF to signal the end of the input. This will also act as the marker for the end iterator.

struct token is also fairly simple. It will hold the TokenType, iterators to where in the input it was found, and the actual lexeme. The lexeme won't be of much use except in the case of tokIdent.

I intentionally made these small so that we could pass tokens around by value most of the time.  The embedded iterators are not really necessary for this project, but would be if this were fleshed out more with good parse error reporting.

The most important pieces are the istype() member function and mkend(). istype() is what the parser uses to decide whether there is a match; mkend() is a static helper that generates an EOF token.
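
To make the discussion concrete, here is a minimal sketch of what lexer.hpp might contain. The token type names are my guesses based on the grammar used in these posts, so treat the repository as the authority:

#include <string>

enum TokenType { tokVar, tokFunc, tokIdent, tokSemi,
                 tokLParen, tokRParen, tokEOF };

struct token {
    TokenType type = tokEOF;
    std::string::const_iterator start{};  // where in the input it was found
    std::string::const_iterator stop{};
    std::string lexeme;                   // only interesting for tokIdent

    bool istype(TokenType t) const { return type == t; }

    // equality is what the iterator comparison (coming up) relies on
    bool operator==(const token& rhs) const {
        return type == rhs.type && lexeme == rhs.lexeme;
    }

    static token mkend() { return token{}; }  // the EOF marker
};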

Lexer Class

Let's start off in the header file – lexer.hpp.

To keep this simple, I decided to hardcode the fact that we are using std::string::const_iterators as input.

The lexer class itself is simply a shell. It holds on to the input iterators and uses them to create its own iterators as requested. begin() and end() are the only reason the outer class exists.
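
A minimal sketch of that shell, building on the token sketch above (member names are my guesses):

class lexer {
public:
    lexer(std::string::const_iterator first, std::string::const_iterator last)
        : m_first(first), m_last(last) {}

    class iterator;          // the real workhorse – see below

    iterator begin() const;  // an iterator positioned on the first token
    iterator end() const;    // an iterator holding the tokEOF marker

private:
    std::string::const_iterator m_first;
    std::string::const_iterator m_last;
};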

Lexer::iterator

Let's look at this in some detail.

using self_type = iterator;
using value_type = token;
using reference = value_type &;
using pointer = value_type *;
using iterator_category = std::forward_iterator_tag;
using difference_type = int;

These types are required to allow our iterator to play nice with the STL algorithms. The STL templates consult these typedefs to know what types to instantiate for temporary values, etc. We could use these to make the lexer class hyper-general and match any value type for which operator== is defined.

But let's not.

self_type operator++();
self_type operator++(int junk);
reference operator*();
pointer operator->();
bool operator==(const self_type& rhs) const { return m_curr_tok == rhs.m_curr_tok; };
bool operator!=(const self_type& rhs) const { return !(m_curr_tok == rhs.m_curr_tok); };

These are the operators that are needed to make it a ForwardIterator – increment, dereference, and equality comparison.
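
The interesting work hides in the increment operators, which pull tokens on demand. In sketch form (m_curr_tok appears in the equality operators above; m_current and m_end are my guesses for the stored input iterators; get_next_token is covered below):

self_type operator++()
{
    // advance by pulling the next token out of the input
    m_curr_tok = get_next_token(m_current, m_end);
    return *this;
}

self_type operator++(int)
{
    self_type saved = *this;  // post-increment hands back the old position
    ++(*this);
    return saved;
}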

Note that, in general, you will also want to supply a const_iterator. The only difference would be that operator* and operator-> would return const versions.

Now let's head over to the implementation – lexer.cpp.

skip_space

This is a utility that – as the name on the box says – skips spaces. It also helpfully returns an indication of whether the end of input was reached. In an effort to be somewhat standard, isspace is used to decide whether a character needs to be skipped.
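
A minimal sketch (my reconstruction, assuming the hardcoded std::string::const_iterator input):

#include <cctype>
#include <string>

// advance past whitespace; report whether we hit the end of input
bool skip_space(std::string::const_iterator& current,
                std::string::const_iterator end)
{
    while (current != end && std::isspace(static_cast<unsigned char>(*current)))
        ++current;
    return current == end;
}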

get_next_token

Here is the heart of the lexer. get_next_token returns, by value, the next token it can extract from the input, or a tokEOF token if it reaches the end of input or cannot make a valid token out of the current position.

After skipping spaces, it checks to see if the current character is a “punctuation” token – in this case a semicolon or a parenthesis.

If not, it gathers up the next batch of consecutive alphanumeric characters and checks to see if they are a keyword. If not, it brands it an identifier.
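
Putting those pieces together, the whole routine might look roughly like this – again my reconstruction, reusing the hypothetical token names and skip_space sketch from above, not the repository's exact code:

token get_next_token(std::string::const_iterator& current,
                     std::string::const_iterator end)
{
    if (skip_space(current, end))
        return token::mkend();               // out of input

    auto start = current;

    // single-character "punctuation" tokens
    switch (*current) {
    case ';': ++current; return token{tokSemi,   start, current, ";"};
    case '(': ++current; return token{tokLParen, start, current, "("};
    case ')': ++current; return token{tokRParen, start, current, ")"};
    }

    // gather consecutive alphanumeric characters
    while (current != end && std::isalnum(static_cast<unsigned char>(*current)))
        ++current;

    if (start == current)
        return token::mkend();               // nothing recognizable here

    std::string text{start, current};

    // keyword or identifier?
    if (text == "var")  return token{tokVar,  start, current, text};
    if (text == "func") return token{tokFunc, start, current, text};
    return token{tokIdent, start, current, text};
}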

And that’s about it for the lexer.

Next time, we’ll look at the parsing primitive and put it all together.

Static Exceptions

Dynamic Exceptions have their flaws. Herb Sutter has proposed a replacement known as Static Exceptions. Let's look at it a bit.

Before we do, we need to look at the C++11 feature std::error_code.

std::error_code and Friends

Anyone who has done any coding in C knows about good old errno, the global int that many system functions will set to signal a problem.  This, of course, has many problems, not the least of which is that different platforms could and did use different integer values to represent the same error.

To bring some order to the chaos, std::error_code was added along with its friend std::error_category.

An error code is actually two numbers – an integer saying which exact error occurred and a "category" or domain for that error. Thus the math-related errors and the filesystem errors could have the same integer value, but different domains. A domain or category is nothing but a pointer to a singleton object.

For a bit more, go look at the cplusplus.com writeup as well as a tutorial on creating your own error codes from the folks behind Outcome.  And here is another writeup on a use of custom error codes.
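
To make the two-part structure concrete, here is a compact sketch of a custom error domain in the style those tutorials describe. The names (parse_error and friends) are mine, purely for illustration:

#include <system_error>
#include <string>

enum class parse_error { unexpected_eof = 1, bad_token };

// the "domain": a singleton category that names and describes our errors
struct parse_error_category : std::error_category {
    const char* name() const noexcept override { return "parse"; }
    std::string message(int ev) const override {
        switch (static_cast<parse_error>(ev)) {
        case parse_error::unexpected_eof: return "unexpected end of input";
        case parse_error::bad_token:      return "bad token";
        }
        return "unknown parse error";
    }
};

const std::error_category& parse_category() {
    static parse_error_category c;  // the singleton the error_code points at
    return c;
}

std::error_code make_error_code(parse_error e) {
    return {static_cast<int>(e), parse_category()};
}

// opt in, so a parse_error converts implicitly to std::error_code
namespace std {
    template <> struct is_error_code_enum<parse_error> : true_type {};
}

With that in place, std::error_code ec = parse_error::bad_token; carries both the integer and the domain, and ec.message() produces a human-readable string.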

For our purposes, std::error_code has four really nice properties:

  • It is small – the size of two pointers. It could in theory be passed around in CPU registers.
  • Creating one cannot possibly throw an exception.
  • Copying and/or moving can be done with a memcpy or just two memory read/writes.
  • It does not require any RTTI – no dynamic casting is required – only (possibly) a static_cast between integer types.

Dynamic Exceptions considered harmful

Sutter does a much better job than I can of enumerating the problems with the current exception system. So go read the paper.

And error-return schemes such as Expected or Outcome aren't much better.

Static Exceptions

Sutter's proposal is to do something like the following.

Introduce a new keyword throws.

If you define a function as:

T my_function() throws;

Then, behind the scenes, the compiler will act as if the function were defined as:

variant<T, std::error_code> my_function();

In the body of the function, anything that looks like:

throw e;

gets translated to a simple

return e;

And at the call site

try {
    x = my_function();
} catch (e) {
    /* try to recover */
}

Will get translated into something like:

x = my_function();
if (compiler_magic::is_error(x)) {
     /* try to recover */
}

This eliminates the hand-rolled "if checks" that have to be written to use something like Outcome. And it propagates. If you don't handle the error at the call site, there will still be the check, but it will have a simple return to move the exception outward.
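
In the same pseudo-lowering style as above, a call site that does not handle the error would become something like (compiler_magic is still hand-waving, of course):

x = my_function();
if (compiler_magic::is_error(x))
    return compiler_magic::as_error(x);  // hand the error to our caller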

The paper is filled with more details about the interplay between the proposed mechanism and the current exception system, noexcept, and other details the language lawyers need to care about.

Onyx

I have decided to make this the standard exception handling mechanism in Onyx. There are details to be worked out. In particular, in the early stages, I will literally have to rewrite the return types in order to "reduce" Onyx to C++.

But it will be fun to try out.

Identifier Parsing in Boost.Spirit X3 – custom parser

This time around, we will use a custom parser to handle the keywords.

I really hadn’t planned on making this a series, but there you go. This will be the last – I think.

Upgrades

I started from the code from the last post, but did make a minor adjustment. I made underbar (‘_’) a valid character in an identifier.

auto const ualnum = alnum | char_('_');
auto const reserved = lexeme[symtab >> !ualnum];
auto const ident = lexeme[ *char_('_') >> alpha >> *ualnum ] - reserved;

Custom Parser

Parsers in X3 are classes that have a parse template function with a specific signature.

template<typename Iterator, typename Context, typename RContext, typename Attribute>
    bool parse(Iterator &first, Iterator const& last, Context const& context, 
               RContext const& rcontext, Attribute& attr) const

first and last are input iterators that contain the stream of characters (or whatever the iterators are iterating) to match.

context and rcontext contain various client and system supplied information.

attr is the attribute – the value that the parser will pass back on success. In our case, this will just be the keyword itself.

And that really is it. The code itself is straightforward and pretty much mirrors what the "standard" version does – match the given string, then check that the next character (if it exists) is not a letter, number, or underbar.

After that, it is just a matter of using the new parser keyword in the mkkw lambda.
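
For reference, here is what such a parser might look like. This is a sketch following the X3 custom-parser conventions, not a copy of the repository code – the class name keyword_parser and its members are mine:

#include <boost/spirit/home/x3.hpp>
#include <cctype>
#include <string>

namespace x3 = boost::spirit::x3;

struct keyword_parser : x3::parser<keyword_parser> {
    using attribute_type = std::string;
    static bool const has_attribute = true;

    explicit keyword_parser(std::string kw) : kw_(std::move(kw)) {}

    template <typename Iterator, typename Context,
              typename RContext, typename Attribute>
    bool parse(Iterator& first, Iterator const& last, Context const& context,
               RContext const&, Attribute& attr) const
    {
        x3::skip_over(first, last, context);  // honor the skip parser first

        // match the keyword itself, character by character
        Iterator it = first;
        for (char c : kw_) {
            if (it == last || *it != c)
                return false;
            ++it;
        }

        // reject "variable" when looking for "var": the next character
        // must not be able to continue an identifier
        if (it != last &&
            (std::isalnum(static_cast<unsigned char>(*it)) || *it == '_'))
            return false;

        x3::traits::move_to(kw_, attr);  // hand back the keyword as the attribute
        first = it;
        return true;
    }

    std::string kw_;
};

// the mkkw lambda then just wraps construction:
auto mkkw = [](std::string kw) { return keyword_parser(std::move(kw)); };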

Now that wasn’t bad, was it?

Here is the finished code.


Identifier Parsing Redux

The ink hadn’t dried[1] on my Identifier Parsing post when I realized that there was indeed a better way to handle multiple keywords.

In that post I stated that a symbols<T> parser would not help because it suffered the same problem as lit(). Which is true.

What I missed was that, of course, you could use the same trick with symbols as you did with lit() to make it work.

Like this:

symbols<int> symtab;
symtab.add("var")("func");
auto const reserved = lexeme[symtab >> !alnum];
auto const ident = lexeme[ +alnum - reserved ];

That does what we need.

AND, we can fix up our lambda to automatically register new keywords.

auto mkkw = [](std::string kw) {
    symtab.add(kw);
    return lexeme[x3::lit(kw) >> !alnum];
};

Now, we can happily make up keywords and keep the rest of the parser in sync.

I will place a V4 in the GitHub repository.



[1] Yea, I know. Work with me.

Identifier Parsing in Boost.Spirit X3

In Boost.Spirit X3, parsing identifiers is a bit tricky.

If you are used to the distinction between lexical analysis and syntactical analysis (as I am), Spirit can take some getting used to. Lexical analysis is done in the same grammar as the syntactical analysis. So the ubiquitous IDENT token type is now a grammar rule.

To be sure, it doesn’t have to be this way. Spirit parsers work on iterator pairs, so it would be very feasible to write a lexical analyzer that makes a token object stream available via a ForwardIterator and write your grammar rules based on that.

That has some benefits. It definitely would keep the grammar a bit cleaner since you don’t have to write rules around lexical issues. And you don’t have to care about the skip parser (the user-supplied grammar for recognizing spaces, etc that you don’t want to be considered as important input).

But the separation also has a cost. You will frequently have to create a feedback loop where the parser must give guidance to the lexer in order to reduce ambiguity in the grammar. Which means that the two are going to have to share a good bit of state. So, if they are going to be all in each other's business anyway, you might as well use Spirit to write both.

But using Spirit for both ALSO has a bit of cost. Things can get tricky. Let's take a look.

A First Simple Attempt

Below is the code for a very simple grammar – input is the keyword “var” followed by an identifier – that duo can be repeated as many times as you want. Everything is space separated.

Note that the code is also available on GitHub.

Let's go to the bottom for a minute and look at main(). This won't change, so we'll get it out of the way. Most of this is self-explanatory, but I want to mention two things.

The first is the if/else if/else. In general, this is how you will need to check the outcome of your parse. The return value will be true unless special error handling says otherwise. Possibly we will delve into that in a future post. So, the main way to know if you got what you wanted is to see if all the input was consumed.

Now let’s look at the call to the Spirit parsing engine.

bool r = x3::phrase_parse(iter, end_iter, program, x3::ascii::space);

The two iterators say what to parse, program is the parser to use, and space is the skip parser.

The skip parser is a special parser that Spirit calls between calls to other parsers. So, if you have a line

A >> B >> *C

then Spirit will act as if you had specified

skip >> A >> skip >> B >> *(skip >> C)

This will be important in a few paragraphs.

So, now the actual parser:

auto const kw_var = x3::lit("var");
auto const ident = lexeme[ alpha >> *alnum ];
auto const stmt = kw_var >> ident;
auto const program = +stmt;

A “program” is one or more statements, each of which is the keyword “var” followed by an identifier, which is made up of a letter followed by any number of alphanumeric characters.

Let me talk about the “lexeme” parser for a minute. Remember that Spirit inserts a call to the skip parser between invocations of other parsers? If we had written ident as simply

auto const ident = alpha >> *alnum;

that would have been transformed into

auto const ident = alpha >> *( skip >> alnum );

The effect would be that the ident parser would happily consume all the input (assuming no punctuation characters) since “space” (our skip parser) soaks up all the spaces, tabs, etc.

“lexeme” turns off skip processing for the parser it encloses. So, with lexeme, ident will stop at the first non-alphanumeric character – including spaces. lexeme is a way to say that spaces (or whatever the skip parser normally consumes) are important.

So, does our parser work? It does! It even fails as expected.

$ ./parse_ident_v1 "var foo var bar"
parsing : var foo var bar
Good input
$ ./parse_ident_v1 "var foo var"
parsing : var foo var
Failed: didn't parse everything
stopped 3 characters from the end ( 'v' )

Or does it?

$ ./parse_ident_v1 "varfoo varbar"
parsing : varfoo varbar
Good input

A Second Attempt

The problem is that the string parser “lit” is, well, too literal. We asked it to check if the three characters ‘v’, ‘a’, ‘r’ were in the input stream – and they were. What we really wanted was for those characters to be in the stream AND not be followed by anything that we allow in an identifier.

auto const kw_var = lexeme[x3::lit("var") >> !alnum];

Note that we now have to use “lexeme”, otherwise we would skip spaces before checking to see if there was an alphanumeric character.

$ ./parse_ident_v2 "varfoo varbar"
parsing : varfoo varbar
Failed: didn't parse everything
stopped 13 characters from the end ( 'v' )

Much better. Note that it stopped at the “v” in “varfoo”.

But we still have a problem.

$ ./a.out "var var"
parsing : var var
Good input

Hmmm..

A Third Attempt

If the language you are parsing doesn’t reserve keywords, then we’re done. But most languages have at least a few reserved words that cannot be used as general identifiers.

The first fix actually helps us here as well. The fix is as easy as:

auto const ident = lexeme[ (alpha >> *alnum) - kw_var ];

Note that the “except” operator (“-”, my term) may not work the way you think. What does NOT happen is that the first parser does its thing and then the “except” parser checks to see if it matches and fails if it does.

The order of operations is reversed from what might be implied by the expression. The “except” parser runs first. If it matches, then the whole expression fails. If it does NOT match, then the primary parser (alpha >> *alnum in this case) runs.

Because of this, something like:

auto const ident = lexeme[ +alnum - lit("var") ];

would fail to match the string “variable” for the same reason that “varfoo” succeeded in our first attempt.

With that in mind let’s do one more thing – let’s cater to the case of multiple keywords.

Multiple Keywords

It would be tempting – after glancing at the Spirit X3 docs – to try to use x3::symbols for this. It would indeed be nice if it worked:

x3::symbols<int> symtab;
symtab.add("var")("func");
auto const kw_var = lexeme[ lit("var") >> !alnum];
auto const kw_func = lexeme[lit("func") >> !alnum];
auto const ident = lexeme[ (alpha >> *alnum) - symtab ];

Unfortunately, this suffers from “lit”-erally the same problem. symtab is (for parsing purposes) equivalent to:

symtab = lit("var") | lit("func")

So the string “var variable” would – again – be rejected.

We will need to define a new parser “reserved” that wraps up all the reserved words.

auto const kw_var = lexeme[ lit("var") >> !alnum];
auto const kw_func = lexeme[lit("func") >> !alnum];

auto const reserved = kw_var | kw_func;

auto const ident = lexeme[ (alpha >> *alnum) - reserved ];

And presto!

So, here is the final code. I have also made a little helper lambda to define keywords. I tried to figure out a way to have it also add the new keyword to the reserved parser, but couldn’t come up with anything. I’m happy to listen to any suggestions.

Final Thoughts

Doing lexical analysis in Spirit is not much different from actually writing one freehand. The same thought processes are involved. Just be aware of the skip parser and you'll do fine.

The Onyx Project

I have decided to design and build my own computer language. It will be called Onyx.

I agree, that is a pretty big task.

So we will take it a bit at a time.

Design Criteria

I don't really have a whole lot yet. You might think of Onyx as C++ EXCEPT – meaning, it acts like C++ EXCEPT for these differences. Over time the list will grow and I'll turn it into a full-featured language spec.

Here are the “excepts” so far.

Variable/Function declaration

Onyx has a postfix declaration for the type, e.g.

var foo int;                         // int foo;
var foo * ( * int, *() ) * int;      // int* (*foo)(int*, *());
var baz * [] * int;                  // int *(*baz)[]; I think.
// functions are similar
func doit ( a * int, b MyClass *) YourClass *

Yes, there will be keywords introducing variables and functions.

I haven’t decided what to do about const yet.

var foo const int = 3;
// Q: allow abbreviation as : ??
const foo int = 3;

Classes

Haven't really given this much thought yet. Points for thought:

  • different inheritance types (public/protected/private) or not?
  • What are the visibility levels – all three?? I’m leaning towards only two – public and protected (though I will probably relabel them).
  • virtual inheritance?

Of course, the biggest question is – can objects live on the stack (like C++) or only as references/pointers (like Java)? I want objects on the stack, but I want to see if we can do something about the awful mess of T vs T* vs T& vs T&&.

My current thinking is that there are no pointers – only T and references to T – but references would be “spelled” with “*”. So getting to members through a reference would use dot notation.

var a * MyClass = &anObj; // fine initializing a to reference anObj
var b * MyClass = &otherObj;
a.m1 = b.m1 + 3;  // update a single member
a = b;            // copy - otherObj now has anObj's data.
a = &b;           // update a to reference otherObj (not b)
                  // a would have to have type * * MyClass for that.

Move semantics would definitely be baked in from the beginning.

Operator Overloading

Indeed. But I'm thinking of stealing a page from Python's playbook and having special names rather than trying to use the actual lexical symbol. To help, however, they would be names that would otherwise not be valid function identifiers. Maybe something like this..

class MyClass {
     func op.assign() {};
     func op.plus() {};
     func op.create() {};  // constructor
     func op.destroy() {}; // destructor
};
The downside of this is that it would be a little weird redefining op.plus to do string concatenation.

Templates

I’d like to generalize the if constexpr paradigm to include the whole function/class, so that its entire existence can depend on an “if” of some sort.

tif isarraytype(T) {
   func foo(T) int { /* some stuff */ };
} else {
   func foo(T) int { /* different stuff */ };
}

This may not be doable or even make sense. How does it play with normal overloading?

Where are the template parameter lists placed? I'm thinking of putting them on the class keyword rather than the name.

Module System

Modeled after Python's. But with some compiler support for “module providers” so that the module name would not have to be tied to the file name (e.g. for versioning), though that would be the default.

Modules would be compiled once, of course. Only things specifically marked would be exported. It would be nice to have a tool that could generate a reference guide to a module based on its exported symbol table.

Phase I implementation

The initial parser/compiler will be built in C++ using Boost.Spirit X3. This phase will be more of a translator that will spit out C++ code to implement the semantics. Module files will be some dense representation of the parse tree.

Phase II implementation

This will still utilize Spirit, but will emit LLVM IR as the output.

Conclusion

Thanks for making it this far. If you have any thoughts on any of the above, I would be happy to hear them.

There is obviously a ton of work to do. And not much spare time. So, this will definitely be a slow leisurely endeavor.