Clément Fournier

Email: clement.fournier@tu-dresden.de
Phone: +49 (0)351 463 xxxxx
Visitor's Address: Helmholtzstrasse 18, 3rd floor, BAR III55, 01069 Dresden
Clément Fournier received his engineering degree in Software Engineering from INSA Rennes and his Computer Science Diplom from TU Dresden in July 2022. In 2021, he wrote his diploma thesis on the Rust implementation of the Lingua Franca coordination language. In 2022, he completed an internship at Xilinx (by then part of AMD), working on an MLIR-based neural network compiler.
In September 2022, he joined the chair as a research assistant. He works on high-level compiler frameworks (such as MLIR) and programming models such as Lingua Franca. He focuses on compiling neural networks and related tensor algebra programs onto recent non-von-Neumann platforms, such as AMD's AI Engines.
2024-09: I am currently working on a dataflow representation for the compilation of neural networks. If you are interested in compiler implementation (in the MLIR C++ framework), please drop me an email and we can discuss an appropriately sized topic.
2026
- Hector A. Gonzalez, Javier Acevedo, Khaleelulla K. Nazeer, Clément Fournier, Abdul Rehman Aslam, Jiaxin Huang, Matthias A. Lohrmann, Robert A. Tietze, Christian Eichhorn, Stefan Gumhold, Sami Haddadin, Hamid Sadeghian, Reinhard Heckel, Frank H.P. Fitzek, Jeronimo Castrillon, Christian Mayr, "Artificial intelligence in 6G ecosystem", Chapter in 6G-life (Frank H.P. Fitzek and Holger Boche and Wolfgang Kellerer and Patrick Seeling), Academic Press, pp. 205–227, Feb 2026. [doi] [Bibtex & Downloads]
Artificial intelligence in 6G ecosystem
Abstract
The future technical standard of sixth-generation (6G) technology for wireless communications has accelerated the arrival of interconnected autonomous systems and other sensing devices in a wide range of industrial zones, such as smart factories, smart farms, and cognitive cities, among others. The imminent digitalization of these ecosystems has created highly dynamic environments that demand real-time decisions, making it difficult for humans to keep up with all their details. These dynamic scenarios require planning and execution that is more precise and faster than the speed at which data is acquired. The use of Artificial Intelligence (AI) offers high potential to enable the monitoring and assessment of multi-modal sensor data at a superhuman level, leading to faster decisions with better precision, which reduces undesired automated behavior, while enabling new forms of interaction. This chapter describes techniques, software frameworks, compilation flows, and hardware infrastructure for achieving large-scale, energy-efficient, trustworthy, real-time, and distributed AI in the newly developed era of 6G ecosystems, which produce vast amounts of data. The chapter also describes an economic perspective on the challenges in achieving this vision.
Bibtex
@InCollection{gonzalez_6GBook26,
author = {Hector A. Gonzalez and Javier Acevedo and Khaleelulla K. Nazeer and Clément Fournier and Abdul Rehman Aslam and Jiaxin Huang and Matthias A. Lohrmann and Robert A. Tietze and Christian Eichhorn and Stefan Gumhold and Sami Haddadin and Hamid Sadeghian and Reinhard Heckel and Frank H.P. Fitzek and Jeronimo Castrillon and Christian Mayr},
booktitle = {6G-life},
title = {Artificial intelligence in 6G ecosystem},
doi = {10.1016/B978-0-44-327410-7.00024-7},
editor = {Frank H.P. Fitzek and Holger Boche and Wolfgang Kellerer and Patrick Seeling},
isbn = {978-0-443-27410-7},
pages = {205--227},
publisher = {Academic Press},
url = {https://www.sciencedirect.com/science/article/pii/B9780443274107000247},
abstract = {The future technical standard of sixth-generation (6G) technology for wireless communications has accelerated the arrival of interconnected autonomous systems and other sensing devices in a wide range of industrial zones, such as smart factories, smart farms, and cognitive cities, among others. The imminent digitalization of these ecosystems has created highly dynamic environments that demand real-time decisions, making it difficult for humans to keep up with all their details. These dynamic scenarios require planning and execution that is more precise and faster than the speed at which data is acquired. The use of Artificial Intelligence (AI) offers high potential to enable the monitoring and assessment of multi-modal sensor data at a superhuman level, leading to faster decisions with better precision, which reduces undesired automated behavior, while enabling new forms of interaction. This chapter describes techniques, software frameworks, compilation flows, and hardware infrastructure for achieving large-scale, energy-efficient, trustworthy, real-time, and distributed AI in the newly developed era of 6G ecosystems, which produce vast amounts of data. The chapter also describes an economic perspective on the challenges in achieving this vision.},
month = feb,
year = {2026},
}

Downloads
No Downloads available for this publication
2025
- Asif Ali Khan, Hamid Farzaneh, Karl F. A. Friebel, Clement Fournier, Lorenzo Chelini, Jeronimo Castrillon, "CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms", Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'25), Volume 4, Association for Computing Machinery, pp. 31–46, Mar 2025. [doi] [Bibtex & Downloads]
CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms
Abstract
The rise of data-intensive applications exposed the limitations of conventional processor-centric von-Neumann architectures that struggle to meet the off-chip memory bandwidth demand. Therefore, recent innovations in computer architecture advocate compute-in-memory (CIM) and compute-near-memory (CNM), non-von-Neumann paradigms achieving orders-of-magnitude improvements in performance and energy consumption. Despite significant technological breakthroughs in the last few years, the programmability of these systems is still a serious challenge. Their programming models are too low-level and specific to particular system implementations. Since such future architectures are predicted to be highly heterogeneous, developing novel compiler abstractions and frameworks becomes necessary. To this end, we present CINM (Cinnamon), a first end-to-end compilation flow that leverages the hierarchical abstractions to generalize over different CIM and CNM devices and enable device-agnostic and device-aware optimizations. Cinnamon progressively lowers input programs and performs optimizations at each level in the lowering pipeline. To show its efficacy, we evaluate CINM on a set of benchmarks for a real CNM system (UPMEM) and the memristors-based CIM accelerators. We show that Cinnamon, supporting multiple hardware targets, generates high-performance code comparable to or better than state-of-the-art implementations.
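The abstract above describes progressive lowering: a high-level program is rewritten through a hierarchy of abstraction levels, with device-agnostic optimizations applied first and device-aware ones later. As a rough intuition, here is a toy sketch of that idea in Python; it is not the actual Cinnamon/MLIR code, and the op names (`hl.matmul`, `loops.nest`, `upmem.kernel`) are invented for illustration.

```python
# Toy illustration of progressive lowering, NOT the Cinnamon pipeline:
# a high-level tensor op is rewritten step by step into lower-level ops,
# and each level can host its own optimizations.

def lower_matmul_to_loops(op):
    """High-level tensor op -> explicit loop nest (device-agnostic level)."""
    assert op["name"] == "hl.matmul"
    m, k, n = op["dims"]
    return {"name": "loops.nest", "bounds": [m, n, k],
            "body": {"name": "arith.fma", "inputs": op["inputs"]}}

def lower_loops_to_target(op, target):
    """Loop nest -> hypothetical target dialect (device-aware level)."""
    assert op["name"] == "loops.nest"
    return {"name": f"{target}.kernel", "bounds": op["bounds"],
            "body": op["body"]}

prog = {"name": "hl.matmul", "dims": (4, 8, 4), "inputs": ("A", "B")}
step1 = lower_matmul_to_loops(prog)          # device-agnostic lowering
step2 = lower_loops_to_target(step1, "upmem")  # device-aware lowering
print(step2["name"])  # upmem.kernel
```

Because each lowering step only consumes the level directly above it, new targets can reuse the upper levels unchanged, which is the generalization the paper argues for.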
Bibtex
@InProceedings{khan_asplos25,
author = {Khan, Asif Ali and Farzaneh, Hamid and Friebel, Karl F. A. and Fournier, Clement and Chelini, Lorenzo and Castrillon, Jeronimo},
booktitle = {Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'25), Volume 4},
title = {CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms},
doi = {10.1145/3622781.3674189},
isbn = {9798400703911},
location = {Rotterdam, The Netherlands},
pages = {31--46},
publisher = {Association for Computing Machinery},
series = {ASPLOS '25},
url = {https://dl.acm.org/doi/pdf/10.1145/3622781.3674189},
abstract = {The rise of data-intensive applications exposed the limitations of conventional processor-centric von-Neumann architectures that struggle to meet the off-chip memory bandwidth demand. Therefore, recent innovations in computer architecture advocate compute-in-memory (CIM) and compute-near-memory (CNM), non-von-Neumann paradigms achieving orders-of-magnitude improvements in performance and energy consumption. Despite significant technological breakthroughs in the last few years, the programmability of these systems is still a serious challenge. Their programming models are too low-level and specific to particular system implementations. Since such future architectures are predicted to be highly heterogeneous, developing novel compiler abstractions and frameworks becomes necessary. To this end, we present CINM (Cinnamon), a first end-to-end compilation flow that leverages the hierarchical abstractions to generalize over different CIM and CNM devices and enable device-agnostic and device-aware optimizations. Cinnamon progressively lowers input programs and performs optimizations at each level in the lowering pipeline. To show its efficacy, we evaluate CINM on a set of benchmarks for a real CNM system (UPMEM) and the memristors-based CIM accelerators. We show that Cinnamon, supporting multiple hardware targets, generates high-performance code comparable to or better than state-of-the-art implementations.},
month = mar,
numpages = {16},
year = {2025},
}

Downloads
2504_Khan_CINM_ASPLOS [PDF]
2023
- Christian Menard, Marten Lohstroh, Soroush Bateni, Matthew Chorlian, Arthur Deng, Peter Donovan, Clément Fournier, Shaokai Lin, Felix Suchert, Tassilo Tanneberger, Hokeun Kim, Jeronimo Castrillon, Edward A. Lee, "High-Performance Deterministic Concurrency using Lingua Franca", In ACM Transactions on Architecture and Code Optimization (TACO), Association for Computing Machinery, New York, NY, USA, Aug 2023. [doi] [Bibtex & Downloads]
High-Performance Deterministic Concurrency using Lingua Franca
Abstract
Actor frameworks and similar reactive programming techniques are widely used for building concurrent systems. They promise to be efficient and scale well to a large number of cores or nodes in a distributed system. However, they also expose programmers to nondeterminism, which often makes implementations hard to understand, debug, and test. The recently proposed reactor model is a promising alternative that enables deterministic concurrency. In this paper, we present an efficient, parallel implementation of reactors and demonstrate that the determinacy of reactors does not imply a loss in performance. To show this, we evaluate Lingua Franca (LF), a reactor-oriented coordination language. LF equips mainstream programming languages with a deterministic concurrency model that automatically takes advantage of opportunities to exploit parallelism. Our implementation of the Savina benchmark suite demonstrates that, in terms of execution time, the runtime performance of LF programs even exceeds popular and highly optimized actor frameworks. We compare against Akka and CAF, which LF outperforms by 1.86x and 1.42x, respectively.
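The key idea of the reactor model sketched in this abstract is that events carry logical timestamps and reactions are ordered deterministically, unlike actors, whose message ordering depends on arrival order. A minimal Python sketch of that scheduling discipline (not the LF runtime; the `Reactor` class and its fields are invented for illustration):

```python
import heapq

# Minimal sketch of deterministic event scheduling, NOT the LF runtime:
# events are ordered by (logical timestamp, reaction priority), so the
# execution order is independent of the order in which they were scheduled.

class Reactor:
    def __init__(self):
        self.queue = []  # heap of (timestamp, priority, seq, reaction, payload)
        self.seq = 0     # insertion counter; breaks ties so lambdas never compare

    def schedule(self, timestamp, priority, reaction, payload):
        heapq.heappush(self.queue,
                       (timestamp, priority, self.seq, reaction, payload))
        self.seq += 1

    def run(self):
        log = []
        while self.queue:
            t, _, _, reaction, payload = heapq.heappop(self.queue)
            log.append(reaction(t, payload))
        return log

r = Reactor()
# Scheduled out of order, but executed by (timestamp, priority):
r.schedule(2, 0, lambda t, x: f"t={t}: {x}", "second")
r.schedule(1, 1, lambda t, x: f"t={t}: {x}", "first-b")
r.schedule(1, 0, lambda t, x: f"t={t}: {x}", "first-a")
print(r.run())  # ['t=1: first-a', 't=1: first-b', 't=2: second']
```

The paper's contribution is showing that such a deterministic discipline can still be executed in parallel without giving up performance relative to actor frameworks.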
Bibtex
@Article{menard_taco23,
author = {Menard, Christian and Lohstroh, Marten and Bateni, Soroush and Chorlian, Matthew and Deng, Arthur and Donovan, Peter and Fournier, Clément and Lin, Shaokai and Suchert, Felix and Tanneberger, Tassilo and Kim, Hokeun and Castrillon, Jeronimo and Lee, Edward A.},
title = {High-Performance Deterministic Concurrency using Lingua Franca},
doi = {10.1145/3617687},
issn = {1544-3566},
number = {4},
pages = {1--29},
url = {https://doi.org/10.1145/3617687},
volume = {20},
abstract = {Actor frameworks and similar reactive programming techniques are widely used for building concurrent systems. They promise to be efficient and scale well to a large number of cores or nodes in a distributed system. However, they also expose programmers to nondeterminism, which often makes implementations hard to understand, debug, and test. The recently proposed reactor model is a promising alternative that enables deterministic concurrency. In this paper, we present an efficient, parallel implementation of reactors and demonstrate that the determinacy of reactors does not imply a loss in performance. To show this, we evaluate Lingua Franca (LF), a reactor-oriented coordination language. LF equips mainstream programming languages with a deterministic concurrency model that automatically takes advantage of opportunities to exploit parallelism. Our implementation of the Savina benchmark suite demonstrates that, in terms of execution time, the runtime performance of LF programs even exceeds popular and highly optimized actor frameworks. We compare against Akka and CAF, which LF outperforms by 1.86x and 1.42x, respectively.},
address = {New York, NY, USA},
articleno = {48},
copyright = {Creative Commons Attribution 4.0 International},
journal = {ACM Transactions on Architecture and Code Optimization (TACO)},
month = aug,
numpages = {29},
publisher = {Association for Computing Machinery},
year = {2023},
}

Downloads
2309_Menard_TACO [PDF]
2021
- Clément Fournier, "A Rust Backend for Lingua Franca", Master's thesis, TU Dresden, Dec 2021. [Bibtex & Downloads]
A Rust Backend for Lingua Franca
Bibtex
@MastersThesis{Fournier-diploma21,
author = {Clément Fournier},
title = {A Rust Backend for Lingua Franca},
school = {TU Dresden},
month = dec,
year = {2021},
}

Downloads
2112_Fournier_DA [PDF]