Posted on 2017-03-07 by Chris Dornan

Background

Just over 3 years ago I filed this issue on one of our main repositories:

After problems we have seen with regex-compat (which uses the native, platform libraries) it is maybe best to anticipate further problems and switch over to the -tdfa variants, which should be compatible (except where the platform libraries deviate from POSIX in their eccentric ways). atlas.cabal and api-tools.cabal to be adjusted.

When we realised what a mess the native libraries were in (we uncovered the problem trying to work out why our Linux and macOS builds were diverging – see Chris Kuklewicz’s notes), there was great rejoicing that we had extricated ourselves from this swamp in switching to a solid, properly engineered library that had been tested and benchmarked. Since that day we have had no technical problems with our chosen regex package.

Except that I noticed at the end of last year that we were hardly using it. In particular, every time I encountered an application that needed regular expressions I would look around for documentation and find myself back with Bryan O’Sullivan’s venerable tutorial (which celebrated its 10th birthday this Monday). I would either find a way to not use regular expressions or learn just enough to write a little utility function in the module where it was needed and forget all about them until the need surfaced again, whereupon I would typically find a way not to use them.

Late last year I decided enough already, and spent the break getting to the bottom of why regular expressions in Haskell were such a chore. The result is the regex package. But before we get to that, a little more about the problem.

The Problem

The problem seems to be that Haskellers are either,

programmers who see good regex support as a priority for denizens of lesser, impoverished programming languages that are not very good at proper grown-up parsing and therefore need ‘over-elaborate’ regex support to cover their programming language’s shortcomings (note), or

programmers who have experienced a productive relationship with said programming systems and know the value of a good regex toolkit.

Note: This is satire, well known to cause disturbance on the internet, but rest assured that I am poking some gentle fun at Haskellers, among whom I number myself.

The former tend to ignore the pleas of the latter to provide the kind of high-quality toolkit that they have seen to be so productive, while the latter go off and write yet another PCRE-shim package for themselves and their friends, leaving a fragmented regex-package space that continues to be dominated by the solid regex-base family, offering either a PCRE-shim backend (regex-pcre) or a native Posix backend (regex-tdfa).

Unfortunately, the historical Haskell regex API is truly and deeply horrible to use. Haskell APIs have come a long way since the original Text.Regex was proposed in the pre-Hackage days, before there was a coherent ecosystem beyond that shipped with GHC.

The most serious problem with the Text.Regex =~ operator is that the result type is overloaded, making it difficult to understand what is going on. Here is the type of its match operator from one of the back ends, as it appears in Hackage:

(=~) :: ( RegexMaker Regex CompOption ExecOption source
        , RegexContext Regex source1 target
        )
     => source1 -> source -> target

The problems are several:

the signature is complicated (Bryan has to gloss over it at the start of his tutorial);

the text, the RE and the result are all overloaded, making it unsafe (is it really using the types I assume it is?) and awkward to use in the REPL (you can’t just look at a match, for example);

it is really not at all easy to work out how the result type works — the only good source I could find was the excellent haskell-regex-examples, where every result type is enumerated with brief examples;

there is no help with text replacement, the user having to write their own text replacement functions;

the standard operator provides no means for controlling case sensitivity and multi-line modes, and each back end uses its own configuration types.

In addition there are some lost opportunities, like the lack of compile-time checking of the validity of regular expressions.
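To see concretely how the overloaded result type behaves, here is a sketch using the classic Text.Regex.TDFA interface from regex-tdfa (the Bool, Int and String results come from the standard RegexContext instances):

```haskell
import Text.Regex.TDFA ((=~))

main :: IO ()
main = do
  -- the very same match expression, used at three different result types
  print ("foo bar foo" =~ "foo" :: Bool)    -- did anything match?
  print ("foo bar foo" =~ "foo" :: Int)     -- how many matches?
  print ("foo bar foo" =~ "foo" :: String)  -- the first match itself
```

Nothing in the expression itself tells you which of these you get; the behaviour is driven entirely by the type the surrounding context demands, which is exactly why the result type is so hard to work out.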

Regex aims to resolve these issues by building a toolkit on top of the excellent foundations provided by regex-base , regex-tdfa and regex-pcre , preserving all of the goodness accumulated by nearly nine years of development and testing, while providing a well-documented high-level API that more of the things that are generally expected from a regex API (plus a couple more things besides).

Introducing regex

The regex package provides the following toolkit for the regex-tdfa and regex-pcre packages:

text-replacement operations with named captures;

special datatypes for matches and captures;

compile-time checking of RE syntax;

a unified means of controlling case-sensitivity and multi-line options;

high-level AWK-like tools for building text processing apps;

the option of using match operators with reduced polymorphism on the text and/or result types;

regular expression macros, including a library of useful RE macros and a test bench for testing and documenting new macro environments;

built-in support for the TDFA and PCRE backends;

comprehensive documentation and copious examples.

The following sections will illustrate these points with example uses of the package.

Simple regex REs

regex checks regular expressions at compile time using [re| … |] quasi-quotes, so the QuasiQuotes language extension will need to be enabled, which you can do at the top of your program:

{-# LANGUAGE QuasiQuotes #-}

Next you will need to import one of the API modules. The module you import will depend on:

the backend you want to use: PCRE for Perl-flavoured REs or TDFA for Posix REs; and

the kind of text you want to match — you can choose one text type and get operators that match that type only, or import overloaded operators that will work with any text type that your chosen back end supports.

The tutorial goes into more detail about these options. For this article we will use the TDFA backend with the classic (but low-performance) Haskell String type.

import Text.RE.TDFA.String

This gets us two regex matching operators with the following types:

(*=~) :: String -> Regex -> Matches String
(?=~) :: String -> Regex -> Match String

The first operator (*=~) looks for all matches of the RE in the String. The second (?=~) looks for the first match only. We can combine these operators with functions over Matches and Match (see the API documentation, Text.RE, for details) to find out useful things about the matches. For example,

matched $ "2016-01-09 2015-12-5 2015-10-05" ?=~ [re|[0-9]{4}-[0-9]{2}-[0-9]{2}|]

will yield True indicating that a match was found, while

countMatches $ "2016-01-09 2015-12-5 2015-10-05" *=~ [re|[0-9]{4}-[0-9]{2}-[0-9]{2}|]

yields 2.
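To recover the matched text itself, the Matches type comes with accessor functions; here is a sketch, assuming the matches accessor exported via Text.RE:

```haskell
-- extract the list of all matched strings from the Matches value
matches $ "2016-01-09 2015-12-5 2015-10-05" *=~ [re|[0-9]{4}-[0-9]{2}-[0-9]{2}|]
```

which should yield ["2016-01-09","2015-10-05"], the two well-formed dates in the input.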

Note that the regular expressions are enclosed in [re| … |] quasi-quotes, so:

the regular expression gets checked at Haskell compile time, guaranteeing that the RE will be compiled successfully by the back end when it is used at run time — expressions like [re|*|] will not get past the type checker;

unlike regular expressions embedded in string literals, you can use characters like ‘\’ and ‘"’ without having to escape them.

Options

To specify case-sensitivity and multi-line modes, substitute a suitably named variant of the re quasi-quoter. So,

countMatches $ "0a\nbb\nFe\nA5" *=~ [reBlockInsensitive|[0-9a-f]{2}$|]

yields 1.
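For contrast, a multi-line variant of the quoter lets $ match at the end of each line rather than only at the end of the whole text (a sketch, assuming the reMultilineInsensitive quoter from the same module):

```haskell
-- in multi-line mode, $ anchors at every line end, so each of the
-- four lines contributes a match
countMatches $ "0a\nbb\nFe\nA5" *=~ [reMultilineInsensitive|[0-9a-f]{2}$|]
```

which should yield 4, one match per line.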

Text Replacement

A function to convert ISO format dates into a UK-format date could be written thus:

uk_dates :: String -> String
uk_dates src =
    replaceAll "${d}/${m}/${y}" $
      src *=~ [re|${y}([0-9]{4})-${m}([0-9]{2})-${d}([0-9]{2})|]

uk_dates "2016-01-09 2015-12-5 2015-10-05"

yields "09/01/2016 2015-12-5 05/10/2015" .

You can also apply functions to the captures to transform them: see Text.RE.Replace and the tutorial for details.
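As a sketch of that capture-transforming style (assuming the replaceAllCaptures function, the TOP context and the capturedText accessor from Text.RE.Replace, so check the module's documentation for the exact signatures), here is a function that upper-cases every lowercase word it finds:

```haskell
{-# LANGUAGE QuasiQuotes #-}
import Data.Char (toUpper)
import Text.RE.Replace
import Text.RE.TDFA.String

-- shout: upper-case every run of lowercase letters, leaving the
-- rest of the text untouched
shout :: String -> String
shout src = replaceAllCaptures TOP f $ src *=~ [re|[a-z]+|]
  where
    -- f sees the match, the capture's location and the capture itself,
    -- and returns the replacement text (Nothing leaves it unchanged)
    f _ _ cap = Just $ map toUpper $ capturedText cap
```

so that shout "foo BAR baz" should give "FOO BAR BAZ".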

One of the big surprises of the last few months working with regex is how useful it is to have a simple sed-like framework for processing files.

This little filter has proven to be mighty useful in the toolkit that I use to maintain regex itself.

include :: LBS.ByteString -> IO LBS.ByteString
include = sed' $ Select
    [ (,) [re|^%include ${file}(@{%string})$|] $ Function TOP incl
    , (,) [re|^.*$|] $ Function TOP $ \_ _ _ _ -> return Nothing
    ]
  where
    incl _ mtch _ _ = Just <$> LBS.readFile (prs_s $ mtch !$$ [cp|file|])

    prs_s = maybe (error "include") T.unpack . parseString

It just looks for lines in the input text that look like this:

%include "foo.txt"

and replaces the line with the contents of the named file in the output text.

Note the use of the @{%string} macro in the regular expression, which matches the quoted string argument, and the companion parseString function, which takes the text matched by the macro and returns the string it quotes. Having such a library of frequently used patterns with associated parsers is dead handy. (See macros.regex.uk for the macro table documentation.)

You can also use the regex test bench to build your own macro environments — see the NGINX log processor example for an extended example of this technique.

Tutorial and Examples

I thought I would have to work to provide better examples than the typical foo-bar examples, and so wrote the NGINX log processor example, which is great because it was responsible for the regex macros and testbench, and remains the best example of developing regular expressions at scale. But I need not have worried, because once I started breaking my old habit of avoiding REs in the toolkit I use for maintaining the package, the floodgates opened. Check out the tutorial and examples for an overview.

Conclusion

The effect of having this toolkit available for the website production tools has been a complete surprise — I simply had no idea of the importance of this package when I started work on it. Regular expressions are important. Good library interfaces are important, as we have seen from the high-quality packages that have been added to the Haskell ecosystem in recent years.

The great thing about a strongly-typed language is that it encourages iterative improvement of library interfaces, which could perhaps be a strategic advantage for the Haskell ecosystem in the long haul.