demonstrate the efficiency of the method through practical examples like a PID controller and a part of the MP3 decoding algorithm.

For the method, we defined the following requirements:

1. compile Haskell to VHDL code automatically

2. notify the user about the error details in case the code cannot be compiled

3. make the compiler program modular, so that its parts can be used separately

4. ensure that pipeline optimization methods can be included if necessary

5. the operation set should be dynamically extendable, without modifying the compiler program
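
To make the intended input concrete, the following minimal sketch shows the style of algorithm the compiler is meant to accept: a pure Haskell function with no hardware concepts. The function and its exact form are illustrative only and are not the PID or MP3 examples evaluated in the paper.

    -- Illustrative user algorithm only (not the paper's PID or MP3 example):
    -- one update step of a simple PI controller, written as a pure function
    -- with no hardware concepts.
    piStep :: Double -> Double -> Double -> Double -> (Double, Double)
    piStep kp ki err integ = (out, integ')
      where
        integ' = integ + err             -- accumulated error
        out    = kp * err + ki * integ'  -- controller output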

Among the functional languages (Lava [3], µFP [12], Ruby [5], SASL [4]) that can be used to generate HDL code, the most similar project is CλasH (CAES Language for Synchronous Hardware) [2, 7]; nevertheless, it meets only the first two requirements from the above list. However, the purpose of CλasH is different from that of the method introduced in this paper. Although CλasH is based on Haskell syntax, it is a hardware description language, in contrast to our method, whose purpose is a language without hardware concepts. CλasH also has some disadvantages compared with the method presented in this paper: it has no pipeline optimization stage, and it cannot include external optimization methods, as it does not use dataflow graphs as an intermediate representation. Its operation set is constant, thus new operations cannot be added dynamically.

The method and the compiler program introduced in this paper meet Requirement 1, as the code is compiled automatically. In our method the GHC frontend is used to preprocess the source code, which includes the parser, the lexer and the generation of the abstract syntax tree. Therefore, reporting of compiler errors (Requirement 2) is ensured by GHC. Our method has a modular structure: it is divided into frontend and backend parts, and these parts consist of inner stages, as will be detailed later (Requirement 3). Between the frontend and the backend, the interface is a dataflow graph, which is an intermediate representation of the algorithm. Most pipeline optimization systems are based on dataflow graphs, therefore these stages can be included in our system (Requirement 4). An external database is used as the operation set, therefore expanding the set of operations dynamically is simple (Requirement 5).
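
The excerpt does not fix the schema of this operation-set database; the following minimal sketch, with field names chosen purely for illustration, indicates how an entry binding a Haskell-level operation to a VHDL module might look.

    -- Illustrative sketch of an operation-set entry; the field names and the
    -- example values are placeholders, not the database schema of the paper.
    data OpEntry = OpEntry
      { opName     :: String    -- operation name as used in the Haskell code
      , vhdlEntity :: String    -- VHDL entity that implements the operation
      , inPorts    :: [String]  -- input port names of the entity
      , outPort    :: String    -- output port name of the entity
      , latency    :: Int       -- clock cycles, usable by pipeline optimization
      }

    -- Adding a new operation then amounts to adding a new entry, without
    -- touching the compiler program (Requirement 5).
    add32 :: OpEntry
    add32 = OpEntry "add32" "adder_32bit" ["a", "b"] "sum" 1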
2. THE PROPOSED COMPILER ARCHITECTURE

In this section the architecture of our Haskell to VHDL compiler system is introduced.

Figure 1: The structure of the compiler method. (Blocks in the figure: user algorithm written in Haskell, operation set, compiler parameters, FOpMap, Haskell-EOG compiler, HLS optimization, EOG, OpVhdlMap, VhdlModules, EOG-VHDL compiler, VHDL code, external software, FPGA synthesizer, FPGA configuration, hardware.)

The structure of the compiler method is shown in Figure 1. As can be seen, the structure is divided into frontend and backend parts. The first is the Haskell-EOG compiler, which produces the elementary operation graph (EOG) [1] as an intermediate dataflow representation. The EOG is a kind of well-known homogeneous synchronous dataflow (HSDF) [8, 11] graph, in which the vertices are the operations and the edges are dataflow links. In this representation an HLS optimization stage can be included, such as the HLS system PIPE [1]. After the optional optimization, the EOG-VHDL compiler, as the backend part, produces the expected VHDL output. From the VHDL code, external FPGA development software can synthesize the FPGA bitstream, which can result in a real FPGA application.
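
As an illustration of this intermediate representation, the sketch below models an EOG-like graph directly in Haskell, with operations as vertices and dataflow links as edges; the type and field names are chosen for illustration and do not come from the implementation.

    -- Illustrative model of an elementary operation graph (EOG): operations
    -- are the vertices, dataflow links are the edges. Names are placeholders.
    type NodeId = Int

    data Operation = Operation
      { nodeId :: NodeId  -- unique vertex identifier
      , opKind :: String  -- which elementary operation the vertex performs
      }

    data Link = Link
      { source :: NodeId  -- vertex producing the value
      , target :: NodeId  -- vertex consuming the value
      }

    data EOG = EOG
      { operations :: [Operation]
      , dataflow   :: [Link]
      }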

2.1 Haskell-EOG compiler

The frontend of our compiler takes the source code written in Haskell and produces the EOG via the successive stages introduced in this section.
Stage 1. Produce the AST Core
The first steps are to parse and analyze the code and to produce the abstract syntax tree (AST). These steps are performed by the GHC frontend, which produces the GHC Core [13] tree. This Core tree mostly consists of the following types of nodes: Lit (constant literal), Var (use of a variable), App (lambda application, a.k.a. function call), Lam (lambda abstraction, a.k.a. function body) and Case (branching). In the further steps of the frontend pipeline these nodes have to be converted to EOG nodes and edges by successive graph transformations.
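
For orientation, the following drastically simplified, Core-like expression type covers just the node kinds listed above; it is an illustration only, as GHC's real Core type carries types, coercions and further constructors.

    -- Drastically simplified, Core-like expression type (illustration only;
    -- GHC's actual Core is considerably richer).
    data Expr
      = Lit Integer                 -- constant literal
      | Var String                  -- use of a variable
      | App Expr Expr               -- lambda application (function call)
      | Lam String Expr             -- lambda abstraction (function body)
      | Case Expr [(String, Expr)]  -- branching: scrutinee and alternatives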
Stage 2. Eliminate the branching
The second step is to eliminate all of the branching (Case) nodes from the Core. During the elimination, every branch is converted to a special function-call (App) node, which performs the branching as a black box. This step is a so-called Core2Core transformation, as both its input and its output are Core trees with the same structure.
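
Using the simplified expression type sketched under Stage 1, the elimination could be expressed roughly as below; the name of the black-box branching function is a placeholder of our own, not the identifier used in the implementation.

    -- Sketch of the Case-elimination (Core2Core) step on the simplified Expr
    -- type above: every Case node becomes a call of a black-box selection
    -- function. "selectBranch" is a placeholder name.
    eliminateCase :: Expr -> Expr
    eliminateCase expr = case expr of
      Case scrut alts ->
        foldl App (App (Var "selectBranch") (eliminateCase scrut))
              [ eliminateCase body | (_, body) <- alts ]
      App f x    -> App (eliminateCase f) (eliminateCase x)
      Lam v body -> Lam v (eliminateCase body)
      other      -> other  -- Lit and Var nodes are left unchanged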