After you have installed a binary package or compiled and installed Alpine3D, you can run your first simulation. We highly recommend that you use the following structure: first, create a directory for your simulation, for example "Dischma". Then, create the following sub-directories:
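A layout along the following lines is typical (the exact directory names are only a suggestion and must simply match the paths configured in your io.ini and run.sh):

    Dischma/
        bin/                Alpine3D executables (see "make deploy" below)
        setup/              io.ini and run.sh
        input/
            meteo/          meteorological forcings
            surface-grids/  DEM, land cover and catchments grids
            snowfiles/      initial snow/soil profiles (*.sno)
        output/
            grids/
            snowfiles/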
Once you have compiled Alpine3D, run "make deploy" to copy all the necessary files into the "bin" sub-directory, then copy this directory as the "bin" sub-directory of your simulation (see the example below). Make sure that your "run.sh" script points to this sub-directory (this would be "../bin" as "PROG_ROOTDIR"). Edit this script to set the proper start and end dates, the modules that you want to enable, and the proper configuration for a sequential or parallel run.
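For example, assuming Alpine3D has been compiled in ~/src/alpine3d and that the simulation lives in ~/Dischma (both paths being purely illustrative):

    cd ~/src/alpine3d
    make deploy                  # gather the executables and libraries into the "bin" sub-directory
    cp -r bin ~/Dischma/bin      # make it the simulation's own "bin" sub-directory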
This is the simplest way of running an Alpine3D simulation: it runs on one core of one computer. The drawback is that if the simulation is quite large, it might take a very long time to run, or even run out of memory (RAM) once there is snow in the simulated domain. In order to run a sequential simulation, set "PARALLEL=N" in the run.sh script. Then run the following command in a terminal (this can be a remote terminal, such as an ssh session) on the computer where the simulation should run:
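A typical invocation would simply be:

    nohup ./run.sh &

If nothing else redirects it, the output of the simulation then ends up in a nohup.out file in the current directory.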
nohup means that if you close the terminal, the simulation will keep going; "&" means that the terminal prompt can accept other commands after you've submitted this one. In order to monitor what is going on with your simulation, simply run something such as the command below ("-f" means that it keeps updating with the new content appended to this file; replace it with something such as "-500" to show the last 500 lines of the file):
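For example, assuming the output ends up in nohup.out as above:

    tail -f nohup.out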
If you need to terminate the simulation, first find out its Process ID (PID) by doing
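For example (the process name to grep for depends on how run.sh launches Alpine3D):

    ps -ef | grep alpine3d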
Then kill the process
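For example, with {PID} standing for the number found above:

    kill {PID}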
This is the easiest way to run a parallel simulation, because it does not require any specific software, only a compiler that supports OpenMP (see also its Wikipedia page). Such compilers are, for example, gcc or clang. The limitation on memory still remains (i.e. a simulation requiring lots of memory will still only have access to the local computer's memory), but the run time will be roughly divided by the number of cores that are given to the simulation. In order to run such a simulation, please compile Alpine3D with the OpenMP option set to ON in cmake. Then, in the simulation's run.sh file, set "PARALLEL=OPENMP" as well as the number of cores you want to use as "NCORES=", as shown in the sketch below. Then run the simulation as laid out in the previous section.
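For example, to use 8 cores (the number of cores is of course only an illustration), the relevant lines of run.sh would read:

    PARALLEL=OPENMP
    NCORES=8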
This is the most powerful way of running a simulation: the load is distributed among distinct computing nodes, therefore reducing the amount of memory that must be available on each node. For very large simulations, this might be the only way to proceed. This is achieved by relying on MPI to exchange data between the nodes and distribute the workload. In order to run such a simulation, please compile Alpine3D with the MPI option set to ON in cmake. Then, in the simulation's run.sh file, set "PARALLEL=MPI" as well as the number of processors/cores you want to use as "NPROC=", and provide a machine file. This machine file contains the list of machines to use for the simulation, as well as how many processors/cores to use on each of them. For example, such a file could be:
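The exact syntax depends on your MPI implementation; with an MPICH-style machine file, something along these lines would use two cores on the local machine and four cores on each of two (purely illustrative) remote nodes:

    localhost:2
    node01:4
    node02:4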
Then run the simulation as laid out in the previous section.
If your computing infrastructure relies on Sun/Oracle Grid Engine (SGE) (for example on a computing cluster), you need to do things differently. First, the job management directives/options must be provided, either on the command line or in the run.sh script. These lines rely on special comments, starting with "#" followed by "$":
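An illustrative set of such directives could be the following (the job name, mail settings and especially the parallel environment in the last line are assumptions that must be adapted to your own cluster):

    #$ -N dischma         # job name
    #$ -S /bin/bash       # shell to use for the job
    #$ -cwd               # run the job from the current working directory
    #$ -j y               # merge stdout and stderr into one output file
    #$ -m ea              # send an email when the job ends or aborts
    #$ -pe smp 16         # computing profile: here a hypothetical "smp" parallel environment with 16 slots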
The last line specifies the computing profile that should be used. Since the job manager will allocate the resources, there is no need to provide either NCORES or NPROC; the machine file (for MPI) is also not used. Then, submit the computing job to the system with "qsub ./run.sh". This should return almost immediately with a message providing the allocated job number. This job number is useful to delete the job ("qdel {job_number}") or query its status ("qstat {job_number}", or simply "qstat" to see all jobs).
If the job submission fails with an error message such as "unknown command", please check that there is no extra "#$" in the script. This happens frequently when commenting out some part of the script, which is then mis-interpreted by SGE. In such a case, simply add an extra "#" in front of the comment.
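For example, commenting out a line that starts with a shell variable creates such an accidental directive (the variable name is of course only an illustration):

    #$SOME_VARIABLE ...      # starts with "#$", so SGE tries to interpret it as a directive
    ##$SOME_VARIABLE ...     # an extra "#" turns it back into a plain comment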
Alpine3D needs spatially interpolated forcings for each grid point. Unfortunately, this limits the choice of forcing parameters: for some parameters (such as HS or RSWR), there is no reliable spatial interpolation method. One way to make use of existing measurements that cannot easily be interpolated is to run a Snowpack simulation at the stations that provide these measurements, and then use alternate, computed parameters (such as PSUM or ISWR) from these simulations as inputs to Alpine3D.
This process is made easier by writing Snowpack's outputs in the smet format and making sure that all the necessary parameters are written out. This means that Snowpack should be configured along the following lines (using only one slope):
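A minimal sketch of the relevant keys could be the following (these are standard Snowpack [Output] keys, but please check the exact names and values against the Snowpack documentation for your version):

    [Output]
    TS_WRITE        = TRUE      ;write out the meteo time series
    TS_FORMAT       = SMET
    TS_DAYS_BETWEEN = 0.041666  ;i.e. hourly outputs
    OUT_METEO       = TRUE
    OUT_SW          = TRUE      ;short wave radiation fluxes
    OUT_LW          = TRUE      ;long wave radiation fluxes
    OUT_MASS        = TRUE      ;mass fluxes such as MS_Snow and MS_Rain
    OUT_T           = FALSE
    OUT_STAB        = FALSE
    OUT_CANOPY      = FALSE
    OUT_HAZ         = FALSE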
Then the output smet files produced by Snowpack can be used directly as Alpine3D inputs, with some on-the-fly renaming and conversions (here, from split precipitation to precipitation/phase):
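A sketch of what this could look like in the Alpine3D io.ini is given below; the station ID and the editing keys are assumptions, and the exact syntax (in particular for recombining MS_Snow and MS_Rain into PSUM and PSUM_PH) should be checked against the MeteoIO input data editing documentation:

    [Input]
    METEO      = SMET
    METEOPATH  = ../input/meteo
    STATION1   = WFJ2                 ;hypothetical Snowpack output file WFJ2.smet

    ;map Snowpack's measured snow height onto the standard HS parameter
    HS::MOVE   = HS_meas
    ;the recombination of MS_Snow and MS_Rain into PSUM / PSUM_PH would be configured here
    ;(see the MeteoIO documentation for the appropriate editing command or data generator)
    ;only keep the parameters that Alpine3D can actually use
    WFJ2::KEEP = TA RH VW DW ISWR ILWR PSUM PSUM_PH HS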
Of course, other stations can also be part of the meteo input and their inputs should remain unaffected (assuming they don't use parameter names such as MS_Snow, MS_Rain or HS_meas and assuming that their parameters are not rejected by the KEEP command).