THE NEVER-ENDING FULLPOS STORY
Last update : 2001-04-20 for cycles CY24T1 / AL15

Remark :
    *    : easy
    **   : not so difficult
    ***  : rather difficult
    **** : difficult


  The remaining bugs :

Aladin LSPLIT=.TRUE. : ****
Fullpos Aladin, and more generally Aladin itself, does not work with LSPLIT=.TRUE. (to be linked with the namelist option TCDIS). Making this option work could improve the balance of the distribution.



    Innovations :
Ozone-associated constant fields : **
An agreement is still needed before starting any work on the ozone-associated constant fields.


    My strongest wishes :

A complete parallelization of WRHFP :**
This concerns the outputs of post-processing: when there is more than one file to write out, we should be able to use as many processors as there are output files. This is actually the "natural" A-level parallelization of this subroutine, while the distribution of the fields packing is rather a "B-level" parallelization.
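The A-level idea above can be pictured, outside the actual Fortran code, as a plain assignment of output files to processors. The following Python sketch is purely illustrative (the function name and the round-robin choice are mine, not anything in WRHFP):

```python
def assign_files_to_procs(n_files, n_procs):
    """Round-robin mapping of output files to processors (A-level parallelism):
    each processor then writes its own files independently of the others."""
    mapping = {rank: [] for rank in range(n_procs)}
    for f in range(n_files):
        mapping[f % n_procs].append(f)
    return mapping

# With 5 output files and 3 processors, no processor owns more than 2 files:
print(assign_files_to_procs(5, 3))
```

The B-level parallelization (distributing the packing of the fields inside one file) would then nest inside each processor's own file list.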

GPRT, GPRTAD, GPRTTL :*
I have modularized the direct model code by writing GPRT... but I should have modified the TL and AD models as well.

NHTYP=1, NFPHTYP=1 to be removed :*
The reduction of the reduced grid according to the old-fashioned formula should be removed from the code (Arpege/Ifs only).

SUHOW1, SUEHOW1, SUHOW2, SUEHOW2, SUHOWLSM, SUEHOWLSM :***
Quite a mess: there is obviously some duplication of code between Arpege and Aladin; worse to my eyes, since this part of the code is scientific, there should be no memory-distribution aspect in it (the latitude and longitude addresses should be the global ones here). Furthermore, there is some duplication of code with the management of the horizontal distribution (see SUPROCFP).

FULLPOS TL : ***
The tangent linear of Fullpos is an interesting project to save resources in the incremental variational analyses. For the time being, the configuration 927 is run twice to create an innovation vector at high resolution. If we could use Fullpos TL on a file of increments, we would save CPU time during the analyses. Luckily, the tangent-linear vertical operators already exist. Another solution under investigation is to use and improve the so-called "bogussing" configuration (also called 927e): we could replace the preliminary call to Fullpos (to create the grid-point background file) by a direct use of the spectral file after the spectral fit of the horizontally interpolated increments. We could also merge the two "horizontal parts" into a single "TL" one.
Remark : this could also serve a surface blending process on analysis increments (ask Dominique Giard for more information).

Speeding up the conf. 927 : ***
After the horizontal interpolations, the fields are spectrally fitted and re-written to an intermediate file that will be the starting file for the vertical interpolations. To achieve that, all the processors must wait for the one processor that writes this intermediate file, which slows down the execution. To get rid of this problem, we should write the spectral fields (after the horizontal interpolations and the spectral fit) directly into SPA2/SPA3 (not forgetting SUOROG), so that much less time would be spent on the synchronization barrier, and SUSPECA would not have to be called afterwards.
Notice : having an external package for spectral transforms suggests other innovations.

Horizontal balance of Fullpos in DM : ****
For the time being, each processor interpolates (on the horizontal) the output points located in its own area. As a consequence, a processor whose area contains no output point will be more or less idle during the horizontal interpolation part. To circumvent this imbalance (which penalizes the distribution of configurations like e927), we could try to distribute the output points more equally and enlarge the halo width of the processors whose areas contain no output point. These processors would then have points to interpolate, but those interpolations would be entirely located in their halos.
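The redistribution target can be sketched with a toy computation (illustrative Python only; the even-share policy is an assumption, and the real cost of enlarging the halos is ignored here):

```python
def balanced_load(points_per_proc):
    """Given the number of output points falling in each processor's own area,
    compute the even share each processor would handle after redistribution.
    Processors with no local output points would receive points lying entirely
    inside their (enlarged) halos."""
    total = sum(points_per_proc)
    n = len(points_per_proc)
    share, extra = divmod(total, n)
    # the first `extra` processors take one extra point each
    return [share + (1 if i < extra else 0) for i in range(n)]

# Four processors; all 12 output points fall inside processor 0's area:
print(balanced_load([12, 0, 0, 0]))  # each processor would interpolate 3 points
```

In the unbalanced original situation, processor 0 does all 12 interpolations while the other three wait.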

Horizontal interpolation descriptors : ***
The land-sea mask descriptors for the horizontal interpolations, as well as the way each field should be interpolated (quadratic, bilinear or "nearest point"), should be available through a namelist. Before that, a little cleanup of the interpolation control (cf. NFPWIDE, IFPWIDE) must be done!
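Such a namelist interface could look like the following sketch. Neither the namelist name NAMFPINT nor its variables exist in the code; they are purely hypothetical, for illustration of what a per-field control could offer:

```fortran
! Hypothetical namelist (NAMFPINT and all variables invented for illustration):
! per-field choice of interpolation type and land-sea mask usage.
 &NAMFPINT
   CLFPFIELD='SURFTEMPERATURE',
   CLFPINTERP='NEAREST',     ! or 'QUADRATIC', or 'BILINEAR'
   LLFPMASK=.TRUE.,          ! use the land-sea mask for this field
 /
```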

"Level 4" cleaning :****
The code to control the set-up of the dynamic fields is complex; recent modifications have been made (using F90 derived types) so that adding new fields is now quite easy. Anyway, there are still simplifications to make (I don't know what yet; probably something based on an option to make a dummy call to DYNFPOS in the set-up, plus a simplification of the users' interface arrays).

Fields requests management :***
The management of the requested fields is more and more complex; it should now be separated from SUFPC and put in a specific set-up subroutine (called before SUDIM). Two kinds of requests could be accepted: MF-style or ECMWF-style. Moreover, the fields control deep in the code should be done by numeric values rather than variable-length character strings. (This idea was already proposed a long time ago by Mats Hamrud; it now works for dynamic fields, but not yet for physical fields/fluxes.)

HPOS : ***
HPOS is undoubtedly the longest subroutine of Fullpos. It should be split and properly rewritten to avoid code duplication.
Furthermore, it should be merged with FPFILLB (the equivalent of HPOS for the biperiodicisation). Last but not least, the SM aspect
should not appear in the DM code (MGETBUF etc.): that would be good preparation for the coming generation of computers (where multitasking could occur outside the communication part, i.e. on the core of the fields).

APACHE : **
I wish we could rewrite APACHE in a modular way: as we have the vertical operators PP*, we could have corresponding operators for the terrain-following vertical levels, APP*, basically using the PP* subroutines. In addition, we could have a preparatory subroutine to set up the vertical interpolations (note that the same could apply to POS!). The use of this new "APACHE package" would then be much more flexible.
Actually I have already started this work: the subroutine veine.F90 has been replaced by a set of 3 subroutines: fpps.F90 (to compute the pressure on a terrain-following surface) + ppleta.F90 (to compute the pressure on the output eta levels) + fpview.F90 (to compute the weights for the vertical interpolations - a new subroutine).

FTINV memory overhead in DM : **
Investigations on the memory cost showed that the critical part for Fullpos-ARPEGE is the inverse Fourier transform in DM, because of the spectral fit on each "derivative" post-processed field for each subdomain. This is the first piece of code to modify if we want to save memory in Fullpos-ARPEGE DM. How to do it is a bit problematic: the solution may be to perform the inverse Fourier transforms on chunks of fields in the post-processing configuration (CDCONF(3) = 'P'). To be revisited with the external transforms package.
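The chunking idea is generic and can be illustrated schematically (Python sketch, nothing to do with the actual FTINV code; the doubling function below merely stands in for the inverse transform): the temporary work space is then bounded by the chunk size instead of the total number of fields.

```python
def transform_in_chunks(fields, chunk_size, transform):
    """Apply `transform` to `fields` chunk by chunk, so that the temporary
    work space never holds more than `chunk_size` fields at once."""
    out = []
    for start in range(0, len(fields), chunk_size):
        chunk = fields[start:start + chunk_size]  # bounded work buffer
        out.extend(transform(f) for f in chunk)
    return out

# Doubling stands in for the inverse Fourier transform of one field:
print(transform_in_chunks([1, 2, 3, 4, 5], 2, lambda f: 2 * f))
```

The trade-off is the usual one: smaller chunks save memory but reduce the amount of work available for vectorization inside each transform call.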

HPOS memory overhead : **
To post-process fields on height or eta levels (outside conf. 927), one must first horizontally interpolate the model primitive variables. This is done through a call to VPOS('M'), then a call to HPOS where the vertical interpolations of the model primitive variables occur just after the horizontal interpolations. This is expensive in memory, because one has to allocate three buffers simultaneously (instead of two): GT0BUF (input to VPOS), AFPBUF (output of HPOS), and GAUXBUF (output of VPOS and input to HPOS), the latter containing all the model dynamic fields as grid points. In fact, in the DM code, GT0BUF is allocated/deallocated at each call to STEPO; nevertheless, GAUXBUF needs to be overdimensioned to contain all the model dynamic fields in grid points. To get rid of this overhead, we could merge the call to VPOS('M') with the call to HPOS, by creating a new subroutine MPOS, so that GAUXBUF would no longer be needed here.
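The buffer saving can be pictured as fusing two passes into one (schematic Python; `vpos` and `hpos` below are arbitrary per-field operations standing in for the real routines, and `two_pass`/`fused` are invented names):

```python
def two_pass(fields, vpos, hpos):
    """Unfused version: the full intermediate list (the GAUXBUF analogue)
    is materialized between the two passes."""
    gauxbuf = [vpos(f) for f in fields]   # whole intermediate buffer in memory
    return [hpos(f) for f in gauxbuf]

def fused(fields, vpos, hpos):
    """Fused version (the hypothetical MPOS): each field flows through both
    operations at once, so no intermediate buffer is needed."""
    return [hpos(vpos(f)) for f in fields]

# Both versions give the same fields; only the memory profile differs:
assert two_pass([1, 2, 3], lambda x: x + 1, lambda x: 2 * x) \
    == fused([1, 2, 3], lambda x: x + 1, lambda x: 2 * x)
```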

Communications : **
According to the Fujitsu DM expert Vijay Saravane, MPE_PROBE is useless in the communication subroutines written for
the post-processing. So he recommends that we remove it (from DISGRIDFP, DIWRGRFP, TRWVTOF). Furthermore, DISGRIDFP should be used in the set-up for distributing the climatology, instead of IRCVGPFFP and ISNDGPFFP, which could then be removed.
 

Vertical aspects : **
We should limit the vertical interpolations to the true ones, i.e.: if CDCONF(5)='S' and the vertical output coordinate is the same as the input one, then we should switch to CDCONF='M' in order not to go through the interpolation subroutines. Then we could remove PPLETA. Finally, when there is no horizontal interpolation, POS should always enter APACHE if height levels are requested. To help with the first point, we could create a logical variable LFPOSVER, by analogy with LFPOSHOR.
We should also find a proper way to set up the fields and level lists, in the case of 927 for instance. It could be a namelist key saying: the default request is all the content of the input file; or a key specifying that all fields are requested on all eta levels, etc. ... to be properly analysed first. We could create a specific namelist to set up the vertical hybrid levels (NAMFPV, by analogy with NAMVV1?)
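The first point above could be as simple as the following test (schematic Python; the function name is mine, LFPOSVER is only a proposed flag, and comparing the level lists directly is an idealization of the real coordinate comparison):

```python
def needs_vertical_interpolation(cdconf5, input_levels, output_levels):
    """Proposed LFPOSVER logic: go through the vertical interpolation
    subroutines only when CDCONF(5)='S' AND the output vertical coordinate
    actually differs from the input one; otherwise behave as CDCONF='M'."""
    if cdconf5 != 'S':
        return False
    return output_levels != input_levels

# Identical input and output coordinates: no interpolation needed.
print(needs_vertical_interpolation('S', [0.1, 0.5, 1.0], [0.1, 0.5, 1.0]))
```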



    The forgotten topics :
 

Aladin option LMAP=.FALSE. for horizontal interpolations : **
For the time being, horizontal interpolations are not possible within Aladin when LMAP=.FALSE. Enabling this implies a more flexible set-up where one could choose the corners of the domain through the X and Y coordinates. A cleanup of the LMAP/RCODIL duality in the code would be helpful beforehand. For Taekwondo adepts only.

Some derivatives not yet post-processable in the semi-Lagrangian model :*
Yes, some fields are not yet post-processable within the semi-Lagrangian model... but nobody has noticed! It must be because these fields are a bit exotic. This is due to the fact that the Eulerian model requires some derivatives that the SL model does not. The day someone complains...

Post-processing of upper air fluxes : ****
Just missing! This will probably require a tremendous preliminary cleaning of the model fluxes management! Who wants to start?
Then the post-processing of grid-point fields should be split into 3 independent calls to SCAN2H/WRHFP ("P", "C" or "X": these
letters are already booked for HPOS!!!)

Vertical velocity, divergence, vorticity ... as grid point fields on eta or height levels : ***
Regularly asked for. This would become possible once the configuration e927 works in "latlon" mode.

SPBFP descriptors and management :***
SPBFP is the spectral array containing the "derivatives" fields on the homogeneous spectral geometry (one for each horizontal subdomain). Its management has been clumsy since its creation.

927 latlon : **
Such a configuration should be allowed. At one time it was possible to run Fullpos from a spectral latlon file, for the needs of COMPARE 2. The problem is: how to manage the extension zone?

CFPFMT='MODEL': **
Analysing the use of this variable through the code would reveal how unclear things are in Fullpos!! Investigation and cleanup would serve the quality of the code.

Generalization of the output formats : **
By allowing several "LELAM" grids at the same time (and why not several Gaussian grids at the same time?). Note that we could
also merge the different outputs, but that may be more complicated.

And maybe ...***
- get rid of the control file ncf927
- no recurrent call of CNT0 in 927
- enable full bounds checking by creating an extra field in the grid-point buffer, to which all the "undefined" pointers would point (address = 0 in a buffer sized 0:n)
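The last idea, a single trap slot at address 0 catching every "undefined" pointer, can be sketched as follows (Python analogue of a buffer dimensioned 0:n; all names here are invented for illustration):

```python
import math

def make_gp_buffer(n):
    """Buffer with one extra leading slot: index 0 is the single trap slot
    to which all 'undefined' field pointers are set, so any access through
    an undefined pointer hits a recognizable value (NaN here) instead of
    arbitrary memory."""
    return [math.nan] * (n + 1)  # buf[0] is the trap; buf[1:] hold real fields

# Pointers of defined fields start at 1; undefined ones stay at 0:
pointers = {'defined_field': 1, 'undefined_field': 0}
buf = make_gp_buffer(3)
buf[pointers['defined_field']] = 42.0
assert math.isnan(buf[pointers['undefined_field']])  # undefined access caught
```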



... Something forgotten ? Ask ryad.elkhatib@meteo.fr