Running ECLIS on a PC or server at CNRM

From V6.7, ECLIS runs on CNRM's PCs, using hendrix for archiving. So far, this has been tested only with Aladin.

Article published on 6 January 2016
Last modified on 4 April 2016

by senesi

For model development, it may make sense to use CNRM Linux PCs and servers. From V6.7, ECLIS can be used to run models in that environment (see also Running ECLIS on cluster Aneto). It then uses the MF archive machine 'hendrix'. It can fetch restarts and/or namelist files from another machine (such as beaufix). It may require local copies of some forcing or fixed files. So far, this has been tested only with Aladin.

(This article assumes you are already familiar with running Eclis)

Arpege and Aladin can run on CNRM's PCs and servers (see EAC). Surfex, Nemo and XIOS can too (see the respective support teams). This is a relevant way to develop the IT or science aspects of models without suffering from HPC shared-access issues, when such developments do not require full-size, long runs. On most CNRM PCs, you may dedicate 3 of the 4 available cores, which allows running a one-month Aladin configuration such as MAD150L91 (grid = 50*60, 91 levels) in one hour of elapsed time.

Eclis runs almost transparently on CNRM’s PCs and servers.

What you have to do is:

- ask CTI to have the 'atd' daemon running on the PC or server (because Eclis uses the 'batch' command to launch the various job steps)
- get the model and know how to compile it; this may imply asking CTI to have a GMAP set of libraries and tools synchronized from a GMAP server to directory /home/common/sync on your PC
- make sure that you can exchange files with hendrix using ftput and ftget; this may imply adding an entry in your .ftuas file for machine hendrix.meteo.fr using ftmotpasse
- have a look at the example param file param_MAD150l (attached, and sketched after this list). In addition, you must also set LON_ANETO=0 as an Eclis parameter if you do not want to run on the Aneto cluster (see also Running ECLIS on cluster Aneto)
- ensure that you have the necessary datafiles copied on the PC or on CNRM's Lustre, if they are not yet available as installed datafiles (see below); however, the forcing files which Eclis usually fetches from hendrix (for beaufix runs) do not need to be copied locally
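
As a rough, non-authoritative sketch, the lines below show how the PC-specific settings mentioned in this article might look inside such a param file; the values are illustrative only, and the attached param_MAD150l remains the reference.

    # Illustrative excerpt only -- see the attached param_MAD150l for a working example
    LON_ANETO=0                            # do not run on the Aneto cluster
    NPROC=3                                # leave at least one core free for the system
    export GFORTRAN_CONVERT_UNIT='swap'    # byte-swap restart/ecoclimap files generated on beaufix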

Useful details are:

- Eclis is installed at /cnrm/aster/data1/UTILS/eclis
- a few utilities are installed at /cnrm/aster/data3/aster/senesi/arpege/library: updcli_gen_7, updclig6a.ald_sfx8; more can be installed on request
- a few datafiles are installed at /cnrm/aster/data3/aster/senesi/NO_SAVE, in subdirs: bcond, SURFEX_BCOND, clim
- file transfers between hendrix and a PC are less reliable than between hendrix and beaufix; for instance, at the experiment install stage, you may get error messages and then need to re-install
- the location of 'SCRATCH' (the base for run dirs and file-transfer dirs) is /tmp/ECLIS; /tmp is erased at machine boot, but you may wish to perform some house-keeping of Eclis simulation files in between (see the sketch after this list)
- when your simulation is running, you may check it using the command 'atq' or 'ps -ef | grep step'
- endian-ness: a working Arpege configuration is one that uses restart/ecoclimap files generated on beaufix and sets export GFORTRAN_CONVERT_UNIT='swap' (see the example param file)
- when setting NPROC, do not request all available cores, because, given the way MPI is used, you must leave at least one core free for the system (and for your other processes)
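
As a minimal sketch of the checking and house-keeping steps above, the commands below assume a standard Linux setup; the layout under /tmp/ECLIS is not detailed in this article, so inspect before removing anything.

    # check that the 'atd' daemon is running (Eclis submits its job steps through 'batch')
    pgrep -x atd || echo "atd is not running - ask CTI to enable it"

    # monitor a running simulation
    atq
    ps -ef | grep step

    # house-keeping of old simulation files under the local SCRATCH
    du -sh /tmp/ECLIS/*    # inspect first, then remove what belongs to finished experiments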


Documents
atmospheric namelist for MAD150l (4.5 kB, zip)
param_MAD150l (1.6 kB, zip)
Surfex namelist for MAD150l (1.5 kB, zip)

In the same section

Breaks in Eclis upward compatibility (19 February 2019, by senesi)
Eclis and Xios (19 February 2019, by senesi)
Running ECLIS on cluster Aneto (1 February 2016, by senesi)
Présentation d'ECLIS (15 October 2013, by senesi)
ECLIS sur Beaufix (20 August 2013, by senesi)