This article describes the features of the programming language Haskell.
A simple example that is often used to demonstrate the syntax of functional languages is the factorial function for non-negative integers, shown in Haskell:
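    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * factorial (n - 1)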
Or in one line:
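    -- one possible single-line formulation
    factorial n = if n > 1 then n * factorial (n - 1) else 1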
This describes the factorial as a recursive function, with one terminating base case. It is similar to the descriptions of factorials found in mathematics textbooks. Much Haskell code is similar to standard mathematical notation in facility and syntax.
The first line of the factorial function describes the type of this function; while it is optional, it is considered to be good style to include it. It can be read as the function factorial (factorial) has type (::) from integer to integer (Integer -> Integer). That is, it takes an integer as an argument, and returns another integer. The type of a definition is inferred automatically if no type annotation is given.
The second line relies on pattern matching, an important feature of Haskell. Note that parameters of a function are not in parentheses but separated by spaces. When the function's argument is 0 (zero) it will return the integer 1 (one). For all other cases the third line is tried. This is the recursion, and executes the function again until the base case is reached.
Using the product function from the Prelude (a collection of small, commonly used functions, loosely analogous to C's standard library) and the Haskell syntax for arithmetic sequences, the factorial function can be expressed in Haskell as follows:
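    factorial n = product [1..n]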
Here [1..n] denotes the arithmetic sequence 1, 2, …, n in list form. Using the Prelude function enumFromTo, the expression [1..n] can be written as enumFromTo 1 n, allowing the factorial function to be expressed as
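    factorial n = product (enumFromTo 1 n)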
which, using the function composition operator (expressed as a dot in Haskell) to compose the product function with the curried enumeration function, can be rewritten in point-free style:
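    factorial = product . enumFromTo 1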
In the Hugs interpreter, one often needs to define the function and use it on the same line, separated by a where or a let ... in. For example, to test the above examples and see the output 120:
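    let { factorial 0 = 1; factorial n = n * factorial (n - 1) } in factorial 5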
or
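    let fac n = product [1..n] in fac 5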
The GHCi interpreter does not have this restriction: function definitions can be entered on one line (using the let syntax without the in part) and referenced later.
In the Haskell source immediately below, :: can be read as "has type"; a -> b can be read as "is a function from a to b". (Thus the Haskell calc :: String -> [Float] can be read as "calc has the type of a function from Strings to lists of Floats".)
In the second line calc = ... the equals sign can be read as "can be"; thus multiple lines with calc = ... can be read as multiple possible values for calc, depending on the circumstance detailed in each line.
A simple Reverse Polish notation calculator expressed with the higher-order function foldl whose argument f is defined in a where clause using pattern matching and the type class Read:
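    calc :: String -> [Float]
    calc = foldl f [] . words
      where
        -- f is tried clause by clause: the first four clauses match operator
        -- words (the exact operator set shown here is illustrative), and the
        -- last clause reads any other word as a number via the Read class
        f (x : y : zs) "+" = (y + x) : zs
        f (x : y : zs) "-" = (y - x) : zs
        f (x : y : zs) "*" = (y * x) : zs
        f (x : y : zs) "/" = (y / x) : zs
        f xs numStr        = read numStr : xs

For example, calc "2 3 4 + *" evaluates to [14.0].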
The empty list is the initial state, and f interprets one word at a time, either as an operator name, taking two numbers from the head of the list and pushing the result back on, or as a floating-point number to be parsed and prepended to the list.
The following definition produces the list of Fibonacci numbers in linear time:
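    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)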
The infinite list is produced by corecursion: the later values of the list are computed on demand starting from the initial two items 0 and 1. This kind of definition relies on lazy evaluation, an important feature of Haskell programming. For an example of how the evaluation evolves, the following illustrates the values of fibs and tail fibs after the computation of six items and shows how zipWith (+) has produced four items and proceeds to produce the next item:
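    fibs                         = 0 : 1 : 1 : 2 : 3 : 5 : ...
    tail fibs                    = 1 : 1 : 2 : 3 : 5 : ...
    zipWith (+) fibs (tail fibs) = 1 : 2 : 3 : 5 : 8 : ...

The item 8, produced next by zipWith (+) from 3 + 5, becomes the seventh element of fibs, and so on.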
The same function, written using Glasgow Haskell Compiler's parallel list comprehension syntax (GHC extensions must be enabled using a special command-line flag, here -XParallelListComp, or by starting the source file with {-# LANGUAGE ParallelListComp #-}):
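    {-# LANGUAGE ParallelListComp #-}

    fibs = 0 : 1 : [ a + b | a <- fibs | b <- tail fibs ]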
or with regular list comprehensions:
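    fibs = 0 : 1 : [ a + b | (a, b) <- zip fibs (tail fibs) ]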
or directly self-referencing:
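    -- the helper name next is illustrative
    fibs = 0 : 1 : next fibs
      where next (a : t@(b : _)) = (a + b) : next t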
With a stateful generating function:
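    -- the generator carries the two most recent values as its state
    -- (the helper name fibGen is illustrative)
    fibs = fibGen (0, 1)
      where fibGen (a, b) = a : fibGen (b, a + b)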
or with unfoldr:
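    import Data.List (unfoldr)

    fibs = unfoldr (\(a, b) -> Just (a, (b, a + b))) (0, 1)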
or scanl:
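    fibs = 0 : scanl (+) 1 fibs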
Using data recursion with Haskell's predefined fixpoint combinator:
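    import Data.Function (fix)

    -- fix ties the knot: the list being produced is fed back in as xs
    fibs = fix (\xs -> 0 : 1 : zipWith (+) xs (tail xs))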
The factorial we saw previously can be written as a sequence of functions:
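    -- one way to express this: compose the functions (1*), (2*), ..., (n*)
    -- and apply the composite function to 1
    factorial n = foldr (.) id [ (k *) | k <- [1..n] ] 1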
A remarkably concise function that returns the list of Hamming numbers in order:
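    -- union (defined below) merges two ordered lists, discarding duplicates
    hamming = 1 : map (2*) hamming `union` map (3*) hamming `union` map (5*) hamming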
Like the various fibs solutions displayed above, this uses corecursion to produce a list of numbers on demand, starting from the base case of 1 and building new items based on the preceding part of the list.
Here the function union is used as an operator by enclosing it in back-quotes. Its case clauses define how it merges two ascending lists into one ascending list without duplicate items, representing sets as ordered lists. Its companion function minus implements set difference:
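    -- standard formulations of ordered-list union and difference
    union (x : xs) (y : ys) = case compare x y of
        LT -> x : union xs (y : ys)
        EQ -> x : union xs ys
        GT -> y : union (x : xs) ys
    union xs [] = xs
    union [] ys = ys

    minus (x : xs) (y : ys) = case compare x y of
        LT -> x : minus xs (y : ys)
        EQ ->     minus xs ys
        GT ->     minus (x : xs) ys
    minus xs _  = xs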
It is possible to generate only the unique multiples, for more efficient operation; since no duplicates are produced, there is no need to remove them:
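    import Data.Function (fix)

    -- a sketch: build the result in stages, one prime factor at a time, using
    -- the fixpoint combinator fix and the merge function defined just below
    -- (the helper name buildLevel is illustrative)
    hamming = 1 : foldr buildLevel [] [2, 3, 5]
      where
        -- for each factor n, weave the n-multiples of everything produced so
        -- far (including 1) into the stream s coming from the larger factors
        buildLevel n s = fix (\xs -> merge s (map (n *) (1 : xs)))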
This uses the more efficient function merge, which does not concern itself with duplicates (and is also used in the next function, mergesort):
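    merge xs@(x : xt) ys@(y : yt)
        | x < y     = x : merge xt ys
        | otherwise = y : merge xs yt
    merge xs [] = xs
    merge [] ys = ys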
Each vertical bar (|) starts a guard clause, with a guard expression before the = sign and the corresponding definition after it; the definition is used only when the guard evaluates to true.
Here is a bottom-up merge sort, defined using the higher-order function until:
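    -- a sketch; the helper names single and mergePairs are illustrative
    mergesort :: Ord a => [a] -> [a]
    mergesort [] = []
    mergesort xs = head (until single mergePairs (map (: []) xs))
      where
        single [_]             = True
        single _               = False
        -- merge adjacent pairs of sorted runs, halving the number of runs
        mergePairs (a : b : t) = merge a b : mergePairs t
        mergePairs t           = t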
The mathematical definition of primes can be translated almost word for word into Haskell:
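    -- a prime is an integer greater than 1 with no divisor d, 1 < d < n
    primes = [ n | n <- [2 ..], all (\d -> n `mod` d /= 0) [2 .. n - 1] ]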
This finds primes by trial division. Note that it is not optimized for efficiency and has very poor performance. Slightly faster (but still very slow) is this code by David Turner:
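    primes = sieve [2 ..]
      where sieve (p : xs) = p : sieve [ x | x <- xs, x `mod` p /= 0 ]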
Much faster is the optimal trial division algorithm
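    -- divide each candidate only by the primes not exceeding its square root
    primes = 2 : [ n | n <- [3 ..]
                     , all (\p -> n `mod` p /= 0)
                           (takeWhile (\p -> p * p <= n) primes) ]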
or an unbounded sieve of Eratosthenes with postponed sieving in stages,
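    -- one rendering of the postponed sieve; reuses minus as defined above.
    -- the multiples of each prime p are subtracted from the candidates only
    -- once the candidates reach p*p, i.e. sieving is postponed in stages
    primes = 2 : sieve primes [3 ..]
      where
        sieve (p : ps) xs =
            let (h, t) = span (< p * p) xs
            in  h ++ sieve ps (minus t [p * p, p * p + p ..])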
or the combined sieve implementation by Richard Bird,
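    -- a rendering of Bird's combined sieve (helper names chosen here); it
    -- merges the multiples of every prime into one ascending list of
    -- composites and subtracts it from the naturals, reusing minus from above
    primes = 2 : minus [3 ..] composites
      where
        composites = unionAll [ map (p *) [p ..] | p <- primes ]
        -- all lists involved are infinite and ascending; the head of each
        -- multiples list (p*p) can be emitted before the rest is examined
        unionAll ((x : xs) : rest) = x : unite xs (unionAll rest)
        unite xs@(x : xt) ys@(y : yt)
            | x < y     = x : unite xt ys
            | x == y    = x : unite xt yt
            | otherwise = y : unite xs yt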
Still faster are a tree-like folding variant, with nearly optimal (for list-based code) time complexity and very low space complexity achieved through telescoping multistage recursive production of primes, and an array-based variant that works on segments between consecutive squares of primes.
The shortest possible code is probably nubBy (((>1) .) . gcd) [2..], using nubBy from Data.List. It is quite slow.
Haskell allows indentation to be used to indicate the beginning of a new declaration. For example, in a where clause:
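    -- one possible definition of product in terms of a nested helper prod
    product xs = prod xs 1
      where
        prod []       a = a
        prod (x : xs) a = prod xs (a * x)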
The two equations for the nested function prod are aligned vertically, which allows the semi-colon separator to be omitted. In Haskell, indentation can be used in several syntactic constructs, including do, let, case, class, and instance.
The use of indentation to indicate program structure originates in Peter J. Landin's ISWIM language, where it was called the off-side rule. This was later adopted by Miranda, and Haskell adopted a similar (but rather more complex) version of Miranda's off-side rule, which is called "layout". Other languages to adopt whitespace-sensitive syntax include Python and F#.
The use of layout in Haskell is optional. For example, the function product above can also be written:
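    product xs = prod xs 1
      where { prod [] a = a; prod (x : xs) a = prod xs (a * x) }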
The explicit open brace after the where keyword indicates that separate declarations will use explicit semi-colons, and the declaration-list will be terminated by an explicit closing brace. One reason for wanting support for explicit delimiters is that it makes automatic generation of Haskell source code easier.
Haskell's layout rule has been criticised for its complexity. In particular, the definition states that if the parser encounters a parse error during processing of a layout section, then it should try inserting a close brace (the "parse error" rule). Implementing this rule in a traditional parsing and lexical analysis combination requires two-way cooperation between the parser and lexical analyser, whereas in most languages, these two phases can be considered independently.
Applying a function f to a value x is expressed as simply f x.
Haskell distinguishes function calls from infix operators syntactically, but not semantically. Function names which are composed of punctuation characters can be used as operators, as can other function names if surrounded with backticks; and operators can be used in prefix notation if surrounded with parentheses.
This example shows the ways that functions can be called:
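    -- the function names here are illustrative
    add a b = a + b              -- an ordinary, alphabetically named function
    (&+) a b = a + b             -- a function named with punctuation characters

    ten1 = 5 + 5                 -- a built-in operator, used infix
    ten2 = (+) 5 5               -- the same operator in prefix notation
    ten3 = add 5 5               -- ordinary prefix application
    ten4 = 5 `add` 5             -- an alphabetic name used infix, with backticks
    ten5 = 5 &+ 5                -- a punctuation name used as an infix operator
    ten6 = (&+) 5 5              -- ... and in prefix notation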
Functions which are defined as taking several parameters can always be partially applied. Binary operators can be partially applied using section notation:
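    -- continuing with add from above; the names are illustrative
    addFive  = add 5             -- partial application: \b -> 5 + b
    plusTwo  = (+ 2)             -- right section: \x -> x + 2
    twoMinus = (2 -)             -- left section:  \x -> 2 - x
    -- note that (- 2) denotes the number negative two, so the right section
    -- of subtraction is written with the function subtract, e.g. subtract 2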
See List comprehension#Overview for the Haskell example.
Pattern matching is used to match on the different constructors of algebraic data types. Here are some functions, each using pattern matching on one of the types described below:
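    -- fromMaybe and isRight mirror functions in Data.Maybe and Data.Either;
    -- isEmpty is an illustrative name

    -- pattern matching on the two constructors of the list type
    isEmpty :: [a] -> Bool
    isEmpty []      = True
    isEmpty (_ : _) = False

    -- pattern matching on Maybe: return a default value for Nothing
    fromMaybe :: a -> Maybe a -> a
    fromMaybe _ (Just y) = y
    fromMaybe d Nothing  = d

    -- pattern matching on Either: which constructor was used?
    isRight :: Either a b -> Bool
    isRight (Right _) = True
    isRight (Left _)  = False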
Using the above functions, along with the map function, we can apply them to each element of a list, to see their results:
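    -- e.g. at a GHCi prompt; results shown as comments
    map isEmpty [[1, 2, 3], [], [2]]            -- [False,True,False]
    map (fromMaybe 0) [Just 2, Nothing, Just 7] -- [2,0,7]
    map isRight [Left "err", Right 6, Right 23] -- [False,True,True]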
Tuples in Haskell can be used to hold a fixed number of elements. They are used to group pieces of data of differing types:
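    -- illustrative values: a pair and a triple with components of different types
    namedAge :: (String, Int)
    namedAge = ("Alice", 30)

    mixed :: (Bool, Char, Double)
    mixed = (True, 'x', 3.14)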
Tuples are commonly used by the zip* functions to place corresponding elements of separate lists together in tuples (zip4 to zip7 are provided in the Data.List module):
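    -- e.g. at a GHCi prompt; results shown as comments
    zip  [1, 2, 3] "abc"                     -- [(1,'a'),(2,'b'),(3,'c')]
    zip3 [1, 2, 3] "abc" [True, False, True] -- [(1,'a',True),(2,'b',False),(3,'c',True)]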
In the GHC compiler, tuples are defined with sizes from 2 elements up to 62 elements.
In the § More complex examples section above, the name calc appears in two roles: once in a type signature, which states what type calc has, and once in the equations that define its value. More generally, Haskell keeps type-level names and value-level names in separate namespaces; in the Person declaration below, for instance, Person is used both as the name of a type and as the name of its data constructor.
Algebraic data types are used extensively in Haskell. Some examples of these are the built-in list, Maybe and Either types:
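    -- the built-in list type is conceptually defined with special syntax:
    --   data [a] = [] | a : [a]
    -- Maybe and Either are defined in the standard Prelude as:
    data Maybe a    = Nothing | Just a
    data Either a b = Left a  | Right b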
Users of the language can also define their own algebraic data types. An example of such a type used to represent a person's name, sex and age might look like:
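    data Sex = Male | Female

    data Person = Person String Sex Int   -- a name, a sex and an age

    -- pattern matching on the single constructor extracts the components
    -- (the accessor names are illustrative)
    getName :: Person -> String
    getName (Person name _ _) = name

    getAge :: Person -> Int
    getAge (Person _ _ age) = age

    tom :: Person
    tom = Person "Tom" Male 27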
The ST monad allows writing imperative programming algorithms in Haskell, using mutable variables (STRefs) and mutable arrays (STArrays and STUArrays). The advantage of the ST monad is that it allows writing code that has internal side effects, such as destructively updating mutable variables and arrays, while containing these effects inside the monad. The result of this is that functions written using the ST monad appear pure to the rest of the program. This allows using imperative code where it may be impractical to write functional code, while still keeping all the safety that pure code provides.
Here is an example program (taken from the Haskell wiki page on the ST monad) that takes a list of numbers, and sums them, using a mutable variable:
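    import Control.Monad.ST
    import Control.Monad (forM_)
    import Data.STRef

    -- sums a list with a mutable STRef accumulator (lightly adapted from the
    -- Haskell wiki example); runST seals the effects, so sumST is pure
    sumST :: Num a => [a] -> a
    sumST xs = runST (do
        acc <- newSTRef 0
        forM_ xs (\x -> modifySTRef' acc (+ x))
        readSTRef acc)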
The STM monad is an implementation of Software Transactional Memory in Haskell. It is implemented in the GHC compiler, and allows for mutable variables to be modified in transactions.
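For example, a transfer between two TVar "accounts" can be written as a single transaction (a sketch; the names are illustrative):

    import Control.Concurrent.STM

    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
        fromBalance <- readTVar from
        toBalance   <- readTVar to
        writeTVar from (fromBalance - amount)
        writeTVar to   (toBalance  + amount)

    main :: IO ()
    main = do
        a <- newTVarIO 100
        b <- newTVarIO 0
        atomically (transfer a b 30)   -- either both writes happen or neither does
        readTVarIO a >>= print         -- 70
        readTVarIO b >>= print         -- 30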
As Haskell is a pure functional language, functions cannot have side effects. Being non-strict, it also does not have a well-defined evaluation order. This is a challenge for real programs, which among other things need to interact with an environment. Haskell solves this with monadic types that leverage the type system to ensure the proper sequencing of imperative constructs. The typical example is input/output (I/O), but monads are useful for many other purposes, including mutable state, concurrency and transactional memory, exception handling, and error propagation.
Haskell provides a special syntax for monadic expressions, so that side-effecting programs can be written in a style similar to current imperative programming languages; no knowledge of the mathematics behind monadic I/O is required for this. The following program reads a name from the command line and outputs a greeting message:
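    -- the exact wording of the prompt and greeting is illustrative
    main :: IO ()
    main = do
        putStrLn "What is your name?"
        name <- getLine
        putStrLn ("Hello, " ++ name ++ "!")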
The do-notation eases working with monads. This do-expression is equivalent to, but (arguably) easier to write and understand than, the de-sugared version employing the monadic operators directly:
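    main :: IO ()
    main =
        putStrLn "What is your name?" >>
        (getLine >>= \name ->
            putStrLn ("Hello, " ++ name ++ "!"))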
The Haskell language definition includes neither concurrency nor parallelism, although GHC supports both.
Concurrent Haskell is an extension to Haskell that supports threads and synchronization. GHC's implementation of Concurrent Haskell is based on multiplexing lightweight Haskell threads onto a few heavyweight operating system (OS) threads, so that Concurrent Haskell programs run in parallel via symmetric multiprocessing. The runtime can support millions of simultaneous threads.
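For example, with GHC's Control.Concurrent library a thread can be forked and synchronized with through an MVar (a minimal sketch):

    import Control.Concurrent

    main :: IO ()
    main = do
        done <- newEmptyMVar                  -- an MVar used purely for synchronization
        _ <- forkIO (do
            putStrLn "Hello from a lightweight Haskell thread"
            putMVar done ())
        takeMVar done                         -- wait for the forked thread to finish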
The GHC implementation employs a dynamic pool of OS threads, allowing a Haskell thread to make a blocking system call without blocking other running Haskell threads. Hence the lightweight Haskell threads behave, from the programmer's point of view, like heavyweight OS threads, and a programmer can be unaware of the implementation details.
Concurrent Haskell has been extended with support for software transactional memory (STM), a concurrency abstraction in which compound operations on shared data are performed atomically, as transactions. GHC's STM implementation is the only STM implementation to date to provide a static compile-time guarantee preventing non-transactional operations from being performed within a transaction. The Haskell STM library also provides two operations not found in other STMs: retry and orElse, which together allow blocking operations to be defined in a modular and composable fashion.
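For example, retry and orElse can be combined to express "take from this account, or else from that one, blocking until either succeeds" (a sketch; the names are illustrative):

    import Control.Concurrent.STM

    withdraw :: TVar Int -> Int -> STM ()
    withdraw account amount = do
        balance <- readTVar account
        if balance < amount
            then retry                                 -- block; re-run when account changes
            else writeTVar account (balance - amount)

    -- try the first account; if that transaction would block, try the second
    withdrawFromEither :: TVar Int -> TVar Int -> Int -> STM ()
    withdrawFromEither a b amount = withdraw a amount `orElse` withdraw b amount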