Optimisation of the first principle code Octopus for massive parallel architectures: application to light harvesting complexes

  1. Alberdi-Rodríguez, Joseba
Supervised by:
  1. Angel Rubio Secades, Supervisor
  2. Javier Muguerza Rivero, Supervisor

Defending university: Universidad del País Vasco - Euskal Herriko Unibertsitatea

Date of defence: 4 June 2015

Examining committee:
  1. José Ángel Gregorio Monasterio, Chair
  2. José Javier López Pestaña, Secretary
  3. Fernando Nogueira, Member
  4. José Miguel Alonso, Member
  5. Silvana Botti, Member
Department:
  1. Polymers and Advanced Materials: Physics, Chemistry and Technology

Type: Thesis

Teseo: 119485 | DIALNET | ADDI

Abstract

Computer simulation has become a powerful technique for assisting scientists in developing novel insights into the basic phenomena underlying a wide variety of complex physical systems. The work reported in this thesis is concerned with the use of massively parallel computers to simulate, at the electronic-structure level, the fundamental features that control the initial stages of the harvesting and transfer of solar energy in green plants, which initiate the photosynthetic process.

Currently available supercomputing facilities offer the possibility of using hundreds of thousands of computing cores. However, obtaining a linear speed-up from HPC systems is far from trivial. Thus, great effort must be devoted to understanding the nature of the scientific code, the methods of parallel execution, the data-communication requirements of multi-process calculations, the efficient use of available memory, and so on. This thesis deals with all of these themes, with a clear objective in mind: the electronic-structure simulation of complete macromolecular complexes, namely the Light Harvesting Complex II, with the aim of understanding its physical behaviour.

In order to simulate this complex, we have used (with the assistance of the PRACE consortium) some of the most powerful supercomputers in Europe to run Octopus, a scientific software package for Density Functional Theory and Time-Dependent Density Functional Theory calculations. Results obtained with Octopus have been analysed in depth in order to identify the main obstacles to optimal scaling on thousands of cores. Many problems emerged, chiefly the poor performance of the Poisson solver, high memory requirements, and the transfer of large quantities of complex data structures among processes. All of these problems have been overcome, and the new version reaches very high performance on massively parallel systems.
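To make the Poisson-solver bottleneck concrete: the core task is solving the Poisson equation for the Hartree potential on a real-space grid. The following is a minimal illustrative sketch of one standard family of methods, a spectral (FFT-based) solve on a periodic 1-D grid; it is not Octopus code, and the function name and setup are this document's own illustration:

```python
import numpy as np

def solve_poisson_1d(rho, box_length):
    """Illustrative FFT-based Poisson solve on a periodic 1-D grid.

    Solves d2(phi)/dx2 = -rho: in Fourier space -k^2 phi_k = -rho_k,
    so phi_k = rho_k / k^2 (the k = 0 mean component is fixed to zero).
    """
    n = len(rho)
    # Angular wave numbers for a grid of n points spanning box_length.
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nonzero = k != 0
    phi_k[nonzero] = rho_k[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(phi_k).real
```

For example, with rho(x) = sin(x) on a box of length 2*pi, the returned potential is sin(x). In a distributed-memory setting the FFTs themselves require global data exchanges, which is one reason this step can dominate communication cost at large core counts.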
Tests run efficiently on up to 128K processor cores, and thus we have been able to complete the largest TDDFT calculations performed to date. At the conclusion of this work, it has been possible to study the Light Harvesting Complex II as originally envisioned.