You may have a look at Boost Spirit, which lets you easily write lexical analysers (Spirit.Lex) and parsers (Spirit.Qi). It takes an interesting approach: the syntax/grammar is defined directly in the C++ code instead of in a separate grammar file. It's portable, standard, self-contained and very elegant.
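As a rough illustration of that style (the grammar below, a comma-separated list of numbers, is only a placeholder, not your actual language), a Qi parser can be written inline like this:

```cpp
#include <boost/spirit/include/qi.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace qi    = boost::spirit::qi;
namespace ascii = boost::spirit::ascii;

int main() {
    std::string input = "1.5, 2, 3.25";
    std::vector<double> values;

    auto it = input.begin();
    // The grammar lives directly in the C++ code: a double, followed by
    // further comma-separated doubles, with whitespace skipped between tokens.
    bool ok = qi::phrase_parse(it, input.end(),
                               qi::double_ % ',',
                               ascii::space,
                               values);

    if (ok && it == input.end())
        for (double v : values) std::cout << v << '\n';
}
```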
You could consider Flex and Bison if your language is going to evolve into something more complex. They use the same kind of input files as Lex & Yacc, their older Unix equivalents. The advantage of these tools is that there is plenty of literature about them. The downside is that they generate code by mixing their own skeleton code with the pieces you provide in the grammar files, which makes the result harder to master and maintain.
But in your specific case, you have a very simple language with only a couple of tokens and, apparently, a simple LL(1) grammar (i.e. the parser only needs to look one token ahead to determine, without ambiguity, what it is going to parse next). It would be easy to craft your own code, possibly using <regex> to ease token scanning, and to create objects that correspond to your language constructs.
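Here is a minimal sketch of that approach; the token set and the statement form (ident '=' number ';') are just assumptions for illustration, since I don't know your exact grammar:

```cpp
#include <iostream>
#include <regex>
#include <stdexcept>
#include <string>
#include <vector>

struct Token { std::string kind, text; };

// Scan the input with one regex whose alternatives are the token classes.
// Note: characters that match no alternative are silently skipped here.
std::vector<Token> tokenize(const std::string& src) {
    static const std::regex re(R"(\s*(?:([A-Za-z_]\w*)|(\d+)|(=)|(;)))");
    std::vector<Token> out;
    for (auto it = std::sregex_iterator(src.begin(), src.end(), re);
         it != std::sregex_iterator(); ++it) {
        const std::smatch& m = *it;
        if      (m[1].matched) out.push_back({"ident",  m[1].str()});
        else if (m[2].matched) out.push_back({"number", m[2].str()});
        else if (m[3].matched) out.push_back({"equals", m[3].str()});
        else                   out.push_back({"semi",   m[4].str()});
    }
    return out;
}

class Parser {
    std::vector<Token> toks;
    std::size_t pos = 0;

    // LL(1): looking at the single next token is enough to decide what to do.
    const Token& peek() const {
        static const Token eof{"eof", ""};
        return pos < toks.size() ? toks[pos] : eof;
    }
    Token expect(const std::string& kind) {
        if (peek().kind != kind)
            throw std::runtime_error("expected " + kind + ", got " + peek().kind);
        return toks[pos++];
    }

public:
    explicit Parser(std::vector<Token> t) : toks(std::move(t)) {}

    // statement := ident '=' number ';'
    void parseStatement() {
        Token name  = expect("ident");
        expect("equals");
        Token value = expect("number");
        expect("semi");
        std::cout << name.text << " = " << value.text << '\n';
    }
    bool done() const { return peek().kind == "eof"; }
};

int main() {
    Parser p(tokenize("x = 42; answer = 7;"));
    while (!p.done())
        p.parseStatement();
}
```

For a grammar that small, one member function per grammar rule (plain recursive descent) stays readable and is easy to extend as your language grows.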