urtapm: Merging dumps and solving absolute-gravimeter adjustment problems with revisits

We have a new branch of production that comprises
1. urtapt analysis by campaign, with alternatives concerning drift and weighting
2. urtapm
3. a merged solution by urtapt
plus some new post-processing utilities for data extraction from tp-files, for statistics, and for plots.
Read the manual for urtap-merge-pp

See the Questions-and-answers section. It includes an important memo on how to avoid rate biases.

Read on for instructions concerning urtapm and merging of campaigns.
Shortcut: For analysis and drift variants, environment parameters $VAR and $DVAR, see here.

Four stages of analysis and aftermath, carried out by the
super-script  do_merging_job
Below we explain:
1.    urtapt for each campaign separately
2.    urtapm to merge design matrices and control parameters
2.1   recycling urtapm for correct tp-indexing (mandatory!) and, optionally, a-priori offsets
3.    urtapt on the merged matrix
3.1   a do-merging-job example; it generates result tables and residual files for all projects
3.2   the script make-tse4urtapm
4.    preparing a plot
do_merging_job can also be used after urtap-merging-pp, provided directive U remains unused in do_merging_job.

Utility: tp2tsf

Further post-merging file processing

Making plots:
1. residuals   - plot-merged-residual
2. predictions - plot-merged-predictions

1. Create the instruction files with option -for-merge

make-urtap-ins -f -for-merge -cn z -C 201405a -with-drift -keep-outliers
For a sequence of campaigns, set one mark letter per campaign, either literally
set mark = ( a b c d e f g h i j k l m n o p )
or computed from the number of campaigns:
set mark = ( `awk -v n=$#campaigns 'BEGIN{for(i=97;i<97+n;i++){printf "%c ",i};printf "\n"}'` )
rm -f campaigns4merge.lst
@ i = 0
foreach c ( $campaigns )
   @ i ++
   make-urtap-ins -f -for-merge -cn $mark[$i] -C $c -with-drift -keep-outliers
end
   (avoid -slopes-from-start !)
Then, for each campaign (select a subset):
urtapt @ urtap-<campaign>-mrg.ins >! urtap-<campaign>-mrg.log
If you already have an urtapm.ins file,
set campaigns = ( `sed -n '/ \^ /s/[^-]*.\([^\.-]*\).*/\1/p' urtapm.ins` )
takes the campaign names from the file names (between `t/urtap-´ and `.cdmp´); otherwise
set campaigns = ( `cat campaigns4merge.lst` )

foreach c ( `cat campaigns4merge.lst` )
   urtapt @ urtap-$c-mrg.ins >! urtap-$c-mrg.log
end
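As a stand-alone check, the awk one-liner that generates the mark letters can be run on its own; 97 is the ASCII code of `a´, so for n campaigns it yields the first n lower-case letters (POSIX-sh sketch; the count of 3 campaigns is made up for illustration):

```shell
# Generate one mark letter per campaign; 97 is ASCII 'a'.
# n=3 is a hypothetical campaign count.
n=3
marks=$(awk -v n="$n" 'BEGIN{for(i=97;i<97+n;i++)printf "%c ",i}')
echo "$marks"
```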

2. Edit the urtapm.ins file:

09 B t/features.tbl
11 ^ t/urtap-201405a.cdmp

12 ^ t/urtap-201502b.cdmp [{meter|-}] [options]
29 ^ t/urtap-yymmddz.cdmp    last possible
31 < t/urtap-merged.cdmp
32 < t/urtap-merged.y.ts     optional output, merged AG DATA input series (use UNW!)
33 < t/urtap-merged.dw.ts  - optional output, merged AG SIGMA series
SCG   => SCG
-W    => -W
BEGIN_HS: 01 02
01: SCG
02: A...,SL..
= ( From 2014 04 05 01 02 03 000 To 2014 04 08 23 59 59 999 ) = F233 : FG5-233  "FG5-233 Instrument offset all campaigns"
/ ( From 2009 06 30 00 00 00 000 To 2016 09 30 23 59 59 999 ) = SSCG            "Residual slope all campaigns"
S d/syn-drift.ts = DRFT : "Drift all time"
R /home/hgs/TD/d/G1_garb_######-1s.mc SCL=1/-774.421
    Coding rules follow below

d/syn-drift.ts is a symlink to a drift file. In mc4campaign a range of options is available. However, in the example below we use the result of expfit.

For synthesizing a drift-ts file, consider
prl2tse, especially the bespoke example.
The advantage over using the .ph04.ts file is that the start and end times can be extended.
The resulting drift-ts needs the resample-and-create-column directive D in urtapm's ins-file.  Example in
~/TD/a/Allcamps/urtapm-big-OO-ndr-fsg-H-ochy.ins :

D d/recent-urtap$DVAR.ts      COL=y  SCL=1.0   ADD=0.0

where d/recent-urtap$DVAR.ts is a symlink to the synthetic drift-ts file and $DVAR is -ochy 
    Consider especially urtapm-big-O.ins

urtapm @ urtapm-big-O.ins :U >! urtapm$VAR.log

2.1 Recycle urtapm

For two reasons, the first urtapm job must be recycled: first, to use the feature table for a correct time-to-index mapping for the bias parameters (orientation-dependent offsets to be estimated); second, you might have a-priori values for time-dependent instrument offsets to be subtracted in a series of campaigns or projects.

If features change, run

make-tse4urtapm -new urtapm-big$UVAR.ins
where the ins-file is associated with the particular analysis variant.
The script makes suggestions on what to do with the output.

The details are covered under Drift above and in the chapters
Writing tse-files for the E-command  and  Writing ts-files for the A-command
( make-offsets-ts -h ; make-ts4VAR -h )
It is the files appearing in the E and A commands that must be renewed.

3. Use urtap-merged.ins

rm -f o/scg-cal-merged$VAR.tse
touch o/scg-cal-merged$VAR.tse
urtapt @ urtap-merged.ins >! urtap-merged$VAR.log
The steps urtapm and urtapt come first in the super-script  do-merging-job ,
followed by preparations for plotting nicely decorated residuals.
You cannot:
    output predictions and residuals
    use the outlier tse-file (re-iteration is not implemented yet, but highly (?) desirable)
    include tidal signals

3.1 Do the merging job. In this example we assume the urtapm stage has been passed, so:
do-merging-job -u urtap-merged-O.ins -i urtapm-big-O.ins D
     The files this job creates are
-rw-rw-r-- 1 hgs hgs    3902 Apr 15 11:12 evaluate-tp-O-expf.tsj
-rw-rw-r-- 1 hgs hgs   28252 Apr 15 11:12 evaluate-tp-O-expf.tbl
-rw-rw-r-- 1 hgs hgs   18018 Apr 15 11:12 evaluate-tp-O-expf.dat
-rw-rw-r-- 1 hgs hgs   16926 Apr 15 11:12 evaluate-tp-O-expf.rsl
-rw-rw-r-- 1 hgs hgs    1510 Apr 15 11:11 xtp-projects.dat
-rw-rw-r-- 1 hgs hgs    3754 Apr 15 11:11 evaluate-tp-O-expf.sol
-rw-rw-r-- 1 hgs hgs    1519 Apr 15 11:11 projects-in-urtapm-O-expf.lst
    and in o/
-rw-rw-r-- 1 hgs hgs 2177180 Apr 15 11:12 o/scg-cal-merged-O-expf.dc.mc
-rw-rw-r-- 1 hgs hgs   77324 Apr 15 11:12 o/xtp-Onsala_AC_20150509a-O-expf.ra.ts
-rw-rw-r-- 1 hgs hgs   75968 Apr 15 11:12 o/xtp-Onsala_AC_20150508a-O-expf.ra.ts
etc for all projects

4. Aftermath: Computing predictions
   Use prl2tse
prl2tse o/scg-cal-merged.prl MERGED >! tmp.tse
   Edit tmp.tse and select the lines you want to combine, e.g. to generate only BoxCars, or BoxCars and Slopes.
   Run tslist for the temporal scope of one campaign at a time.
   The following example is for a four-day campaign with an output sampling interval of 0.1 h
tslist _1920 -Bidate,time -r0.1 -Etmp.tse,M -I -o my-first-prediction.ts
  Studying the variance-covariance matrix
Use plot-vcvf, e.g.
  plot-vcvf -zfloor 0.01 -ps urtap-merged-OO-ndr-H-ochy.ps          \
            -size 0.36 -chs 12 -P ~/www/4me/ag-superc/ -X 3.0       \
            -ft t/feature-OO-ndr-H.tbl                              \
            -corr 1,2,3,4,5,84,85,86,87,88 1,2,3,4,5,84,85,86,87,88

Condump is the entity introduced for this purpose. The design matrix B, the time series Y (and the weights W) are output
to a big binary file accompanied by a number of structuring parameters. The data is output in urtapt right after
call conmrs, i.e. after deleting all rows with missing data (in at least one column of B or in Y).

Feature is an entity that identifies pieces of signal which are appended to already existing columns of B.
The primary purpose is to subject revisits of the same pillar in one of two orientations to the same adjustment parameter.
In principle it is possible to distinguish instruments.
   The features are introduced in the Tide Bands section, under the name TSYS: those four-letter symbols that previously were hard-linked to the design-matrix columns
(it is not quite so straightforward; one symbol is attached to both the sine and the cosine column of a tidal band.
As long as we don't do tides here it won't bother us).
   Now we also fill a common area of features: /cwbandt/.
/ccband/ contains commnt and expltext and
the timing parameters of each feature
(see p/wbands.fh and p/wbands.fp).
   The task of urtapm is to combine some features and keep others separate.

Rules for coding urtapm.ins

File units 11 through 29 are reserved for condump input.
File unit 31 is for output.
File unit 32, if opened, receives the observation series (BIN .ts)
File unit 33, if opened, receives the data weights (inverse-weights to be precise).

In the open-file statements for units 11 - 29, the file-comment part may be used to specify the AG-meter type, the campaign name, and other options.
But remember: comment and file name together cannot exceed 127 characters.
uu ^ cdmp-filename [{meter|-}] [:campaign] [{+|-}FHS ] [WS=val]

where option +FHS indicates that the dump has been written with variable-size hand-selected sets
(option -FHS is needed in the fixed-size case when the default has been changed with the MANYHS option/command).
Similarly, also for the output cdmp on unit 31, the {+|-}FHS  option can be given in the comment part of the open-file line.

:campaign - If the file name contains the name of the campaign, this is the preferred method to communicate it to the program.
If a campaign is given in the file comment, the file name will not be parsed for it.

20 ^ t/urtap-201405b.cdmp FG5-220
will use 201405b as the campaign name, the rule being that the campaign string is coded between `-´ and `-´ or between `-´ and `.´

WS=v    - this option can be used to change the a-priori weights: the series w is multiplied by v and the (weighted) measurements are divided by v

Outlier editing is questionable. The pre-runs should have detected the worst cases.
If noise levels differ between campaigns, the noisier campaign will lose much data. Thus, this option might sacrifice more data than necessary. A much higher rejection level could be used.

The columns of the design matrix are associated with four-character symbols. Each of the urtap jobs whose output is to be processed here shows the symbols in the result sheet (o/*.prl).
; text                   - a comment line, ignored in processing

[NO]GRA [UNW] [IUNTBL=#u] [DTS=#dt] [DBG] [MANYHS] [DCOPT={W|U|I}[,<symb>]]

      GRA or NOGRA must appear first on the line, else the rest is ignored.
      UNW           - undo the weighting of the dump series after processing and writing of the merged dump.
                      Necessary for writing the AG input series (file output on unit 32).
      IUNTBL=#u     - u = unit number for the features table (output). A file instruction must appear
                      in the open-file block. Default output is to STDOUT.
      DTS=#dt       - enforce a sampling interval [10 s]. Important only if consistent in- and output in this respect is required;
                      see the OUTPUT command.
      DBG           - debug, extensive printing (the current version of urtapu8.f does excessive printing, 2016-11-30).
      MANYHS        - change the default from fixed to variable size of hand-selected sets in cdmp file input.
      DCOPT=c       - remove DC components from the system matrix. Method U ignores any weighting, which is
                      the preferred method given that the matrix columns contain "smooth" (i.e. deterministic) signals.
                      An all-and-pure DC column must keep its values. The symbol for this column is `WWWW´ by default;
                      you can alter it by appending `,<symb>´.

WEED: #ythresh #wthresh #wlow
                - sets matrix elements Bij to zero and sigma_i to wlow if  |Bij| > ythresh  or  w_i > wthresh
SIGMA*#s        - multiplies the sigmas (weights = 1/sigma) by s
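One possible reading of the WEED rule, treating the two tests independently, can be sketched with awk on a whitespace table (column 1 = sigma_i, the remaining columns = row i of B; the thresholds and file layout are assumptions for illustration, not urtapm's actual I/O):

```shell
# WEED sketch: zero B elements with |Bij| > ythresh, and replace the
# sigma (column 1) by wlow where w_i > wthresh.  Layout is hypothetical.
printf '60 150 10\n40 20 30\n' |
awk -v yt=100 -v wt=50 -v wlow=999 '{
  if ($1 > wt) $1 = wlow                 # down-weight a noisy row
  for (j = 2; j <= NF; j++)
    if ($j > yt || -$j > yt) $j = 0      # zero large B elements
  print
}'
```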

TRANSLATE: #n             - specifies the number of instructions following (pure comments not counted)
ABCE => ABCD               - Input symbol ABCE is changed to ABCD
-W   => AANA
SYMBOLS: #n                  - specifies the number of instructions following (pure comments not counted)
ABCD => WXYZ               - The following two features are not combined under the name WXYZ:
ABCE => WXYZ                 although r.h.s. symbols may be synonymous, they result in different columns.
ABCD,ABCE[,...] => WXYZ
                           - Combine columns; the l.h.s. may contain wildcards. Up to 10 column symbols can be given on the l.h.s.
DE.. => DEAB               - All features having DE in common are combined
.E.. => DEAC               - All features having E in the second place are combined
XYZ. => forget        - Drop all features starting with XYZ (not fully implemented; collides with feature handling)
AA..[,...] !> AAAA             
                      - The `!´ sign requests that overlapping features are poked together, not added.
                        `&´ is synonymous with `!´. Recommended for the global constant:
                        -W.. !> WWWW
-W.. !> ####               - Combine symbols starting with -W and throw them away.

DEL[ETE:] symb[,symb[,...]]
                     - Delete these columns and their associated features. Comma-separated, four characters
                       for each symbol. Blanks are significant. Trailing (!) wildcard character(s): `.´, e.g. -W..
Comments starting with `;´ are possible on separate lines or after the instructions.
The purpose of the SYMBOLS section is to collect segments of the signal model ("features") under one column, i.e. to estimate one parameter
in the urtapt merged-dump stage. TRANSLATE may help in the combination.
Features not declared in this table will not be merged, even though they might have identical names.
The result sheet of urtapt will show identical TSYS; with the appropriate namelist parameter,
tsf-edit command lines will be produced that specify the time scope of each feature correctly.
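The wildcard `.´ in a four-character symbol behaves like the regular-expression dot, so the set of symbols a rule like DE.. => DEAB would collect can be previewed with a one-liner (the sample symbols are made up):

```shell
# Which symbols would the l.h.s. pattern DE.. collect?
printf 'DEAB\nDEAC\nXYAB\n-WAA\n' |
  awk -v pat='DE..' '$0 ~ "^" pat "$"'
```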

Hand selection
BEGIN_HS: 01 02
02: A...,SL..
This example opens two categories (01, 02): the SCG series (measurement and drift) in 01, and the project biases and slopes in 02.
= ( From YYYY MM DD HH MM SS FFF  To  YYYY MM DD HH MM SS FFF ) = TSYS   : meter   ["explain"]
/ ( From YYYY MM DD HH MM SS FFF  To  YYYY MM DD HH MM SS FFF ) = TSYS [ : meter ] ["explain"]
S filename = TSYS
R /home/hgs/TD/d/G1_garb_######-1s.mc   [ ADD=#v] [ SCL=[1/]#cal] [ COL=#c]
R filename-model                        [ ADD=#v] [ SCL=[1/]#v]   [ COL=#c]
D filename                              [ ADD=#v] [ SCL=[1/]#v]   [ COL=#c]
A ts-file                               [ ADD=#v] [ SCL=[1/]#v]   [ COL=#c]
A ts-file                               [ ADD=#v] [ SCL=[1/]#v] COL=0   = TSYS [ : meter ] ["explain"]
E tse-file,trg                                                  COL=#c  = TSYS             ["explain"]
=   This is a method to add instrument-specific offsets to the design matrix.
/   This is a method to solve for a trend over the entire scope of campaigns.
S   Add a column by sampling a long-duration ts-file, e.g. for drift data.
    The hash characters designate a date YYMMDD that will be issued internally to identify the files as needed.
    You can place a label request after the file name, -Llabel, and a debug option +DBG.
R   Replace a column by sampling 1-s ts-files.
    Default column = 1, scale = 1.0, default additive value = 0.0.
    The additive value in [nm/s2] is applied before scaling.
    Label and debug options as under S.
D   Add the series in the file to the indicated column (default c=1).
A   Add a tp-compressed ts-file to the indicated column.
    c = 0: add a new column.  c = -9: add to the weights column.
E   Edit a column with tsf_edit. c = 0: add a new column.
    The time ranges must be given in sample index #n.
    Use the feature table in the urtapm log (or e.g. the t/feature*.tbl output) or t/urtap-merged.lst to find them.
    Another method to inspect the tp's of a project is here: Writing tse-files for the E-command
         Another method to inspect the tp's of a project is here: Writing tse-files for the E-command

The time ranges should encompass one campaign at a time. The underlined text, including whitespace, is mandatory.
There are (presently) no defaults for the From and To dates.

(Still another suggestion: generate a column using tsf-edit)
E file.tse,TARGET = TSYS  ["explain"]
Use the log output of urtapm to find the index:
ADD 1. From  #1194 To  #1392
ADD 1. From #41279 To #42476

With one-liners, first FG5-233, then FG5-220:
awk '/ACSA/||/AASA/||/ASSA/||/ANSA/{print $0}' urtapm-big-AUNO.log |\
 fgrep -v DumMer | fgrep Onsala_ | sort -n -k5 |\
  awk '{printf "ADD 1.0 From #%i To #%i\n",$5,$6}' | uniq
(You need to remove the FG233 experiment of 2010!)
awk '/ACSA/||/AASA/||/ASSA/||/ANSA/{print $0}' urtapm-big-AUNO.log |\
 fgrep -v DumMer | grep -e ' A._2' | sort -n -k5 |\
  awk '{printf "ADD 1.0 From #%i To #%i\n",$5,$6}' | uniq
(remember: all added columns must also go through weighting).
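The tail of the pipeline can be tried on fabricated log lines (the column layout, with the first and last sample index in fields 5 and 6, is assumed from the one-liners above):

```shell
# Generate tsf-edit ADD lines from (hypothetical) urtapm log lines.
printf '%s\n' \
  'ACSA Onsala_AC_20140405a x 1 1194 1392' \
  'AASA Onsala_AA_20140406a x 1 41279 42476' |
  sort -n -k5 |
  awk '{printf "ADD 1.0 From #%i To #%i\n",$5,$6}' | uniq
```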

An optional section can be added:
OUTPUT [COLUMNS U=#uu] [DT=#dt] [+UNW] [{+|-}FHS[+DBG]] [+MJD]
where the symbols  symb  select matrix columns for output on file unit uu.
Specify lines with symbols only after declaring an output unit in the open-file block. The file is BIN .mc.
A column with the Modified Julian Date can be written; specify +MJD.
The data (except MJD, of course) will be unweighted if option +UNW is specified.

Options +FHS or -FHS determine whether fixed- or variable-size hand-selected sets are included in the merged dump output;
if not given, the current state is not changed. If neither MANYHS nor +FHS has been given anywhere before, the default, fixed size, is used.
The urtapt job that follows needs consistent information by way of the namelist parameter
q_hs_as_needed : .true. is consistent with +FHS and/or MANYHS, and vice versa.
Option {+|-}FHS+DBG runs dump_crecb with a debug option, so you can check the hand_select control parameters.

Option DT=#dt enforces a sampling rate on the output files.
Remember that this is only a nuisance parameter here; however, different sampling intervals may have adverse effects
when post-processing files from different stages of the analysis.

Writing tse-files for the E-command

Create an mc-file from the output jd.ts and wg.ts files:
rm -f t/urtap-merged$VAR.jd.mc
tslist t/urtap-merged$VAR.jd.ts -I -O:MJD t/urtap-merged$VAR.jd.mc
tslist t/urtap-merged$UVAR.y.ts -I -O:Y t/urtap-merged$VAR.jd.mc
List projects and their settings using
tplist-project -qq 201707a ALL t/urtap-merged$VAR.jd.mc
Onsala_AA_20170705a.drop.txt FG5-233 AA N -LWG 172971 176767
Onsala_AA_20170706a.drop.txt FG5-233 AA S -LWG 176768 180882

Find the MJD's:
tslist o/scg-cal-merged-O-sdr-dsyn.jd.mc -qqq -j172970 -Un176767 -Ft1,f13.6 | awk '(NR==1){print} {s=$0} END{print s}'

tslist o/scg-cal-merged-O-sdr-dsyn.jd.mc -qqq -j176768 -Un180882 -Ft1,f13.6 | awk '(NR==1){print} {s=$0} END{print s}'
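The awk filter used above is a generic idiom: it passes through only the first and the last line of its input, i.e. the begin and end records of the listed index range (sample MJD values are made up):

```shell
# Keep only the first and last line (begin/end of the range).
printf '55015.389005\n55015.389121\n55015.389236\n' |
  awk '(NR==1){print} {s=$0} END{print s}'
```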

Example: a tsf-edit section for the FG5-233 S orientation
rm -f cprj$VAR.lst; touch cprj$VAR.lst
foreach c ( $campaigns )
   tplist-project -qq $c ALL t/urtap-merged$VAR.jd.mc | tee -a cprj$VAR.lst
end

awk '/-233/&&/ S /{print "ADD 1.0 From #"$6,"To #"$7,"; 233 S ::",$1}' cprj

awk '\! /^;/&&/-220/&&/ S /{print "ADD 1.0 From #"$6,"To #"$7,"; 220 S ::",$1}' cprj$VAR.lst

foreach c ( $campaigns )
   tplist-project -qq $c ALL t/urtap-merged$VAR.jd.mc | grep -v -e '; ' |\
      awk -v n=$c '(NR==1){s=$6} END{print "From #"s,"To #"$7,"::",n}'
end
There is a script for that. Try
make-tse4urtapm [-new] [<urtapm-ins-file>]
Edit the result if necessary and move it to the subdirectory as required (usually the directory from which urtapm is launched).

Writing ts-files for the A-command

A-files for campaign-related offsets can be written using the YOFFS section in the tse-file produced by  make-tse4urtapm .
You might have a set of contributions you'd like to handle differently. The YOFFS section can be invoked with an E-command.
For additional effects you need to assign values to the environment parameters YOFFS_$campaign
and run tslist as below, however with option  -E urtapm$VAR.tse,YOFFS

The following considers offsets for specific projects. Preliminarily, in lieu of a (super-)script,
we start with an example:
echo "TSF EDIT ADD" >! tmp/adding.tse
set c=201505a
tplist-project -v -OO-ndr -qq $c ALL t/urtap-merged-OO-ndr.jd.mc |\
 awk -v c=$c '{s=$1;sub(/\..*/,"",s);print "ADD ${Z_"s":[0.0]} From #"$6,"To #"$7;if (! b){b=$6}; e=$7} END{print "ADD ${C_"c":[0.0]} From #"b,"To #"e}' \
   >> tmp/adding.tse
echo "END" >> tmp/adding.tse
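Run on a single fabricated tplist-project line (fields 6 and 7 assumed to carry the first and last sample index), the awk above produces one per-project ADD line plus the campaign-wide ADD line:

```shell
# Build tsf-edit ADD lines with environment-variable placeholders.
# The input line is hypothetical; c is the campaign name.
c=201505a
printf '%s\n' 'Onsala_AA_20150506a.drop.txt FG5-233 AA N -LWG 107165 111595' |
awk -v c=$c '{s=$1; sub(/\..*/,"",s)
  print "ADD ${Z_"s":[0.0]} From #"$6,"To #"$7
  if (!b) b = $6; e = $7}
  END{print "ADD ${C_"c":[0.0]} From #"b,"To #"e}'
```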
The file  cprj$VAR.lst  can be used.
The following could be inside a shell script:
setenv Z_Onsala_AA_20150506a #value
setenv C_201505a #value

set ndata=`tslq -v t/urtap-merged-OO-ndr.jd.ts | awk '{print $3}'`
set begd=`tslq -b t/urtap-merged-OO-ndr.jd.ts`
tslist _$ndata -I -B$begd -rs5.0 -E tmp/adding.tse,ADD -o the-afile.ts
There's a script for this purpose:
   Using the results of international AG campaigns (a hand-written offsets2env, or a working copy in tmp/) and the output of  make-tse4urtapm , urtapm$VAR.tse :
make-ts4VAR +using tmp/offsets2env -E tmp/urtapm$VAR.tse,YOFFS -o tmp/meters-180526$VAR.ts
   Analysing the skewness of the drop residuals by project using  stats-of-merged DE , o/yoffs4prjs.tse , producing a series of corrections:
[ setenv YOFFS_SCALE value ;] make-ts4VAR -E 'o/yoffs4prjs.tse,],Skip' -o tmp/tmp$VAR.ts

ADD ${Z_Onsala_AA_20150506a:[0.0]} From #107165 To #111595
ADD ${Z_Onsala_AA_20150507a:[0.0]} From #111596 To #116223
ADD ${Z_Onsala_AC_20150508a:[0.0]} From #116224 To #125744
ADD ${Z_Onsala_AC_20150509a:[0.0]} From #125745 To #135279
ADD ${C_201505a:[0.0]} From #107165 To #135279

The job for urtapt with the merged dumps

Here is the urtap instruction file for solving the merged problem
(a lot of definitions are redundant or without effect; in gray are the preliminary suspects).
There is currently no use of the "Hand Selected" combinations, since BIN-file time series cannot be produced.
   C Analyze merged calibration campaigns
11 < t/urtap.dmp
12 < t/urtap-merged.evs
13 B t/urtap.trs
14 B o/scg-cal-merged.prl
17 < t/urtap-merged.cdmp
 iun_condump=17, opt_condump='GET UNW'
 qbatch=.true., use_window_residual='F', use_eigenvalues='W 6.0 G '
 cause='TGP', demodulation_tide='2,1,1,-1,0,0,0,0'
 iuin=21, trg='BIN', dtut=57.d0, scale_y=1.0, y_units='[nm/s^2]$'
 rec_mrs=-99999.9, fmt='L:A|V'
 q_tsf_edit=.true., tsf_edit_name='OS '

 dff=1.0d0,-0.9d0, l_dff=0, l_filter=0
 ls_psp=128, wt_psp='HANN', mxmiss_psp=32

 display='A@W', q_shut_graphics=.true.
 tsfile_names='o/scg-cal-merged '
 q_frq_tst=.false., frq_tst=1.0d0
 q_filterb_remdc=.false., q_keep_y_dc=.true.
 q_allow_dc=.true., q_weights_dc=.true.
 q_remdc_wmean=.true.,  q_ra_with_dc=.true.
 opt_hand_select_trend = 'No....'
 npal1=99, npal2=1
 outlier_criterion=-4.0, iun_outliers=31
 q_tref_slopes_start=.true., q_offset_slopes=.true.
 quit_offset_slopes=.false., tsys_slopes='SL..'

End of Instructions _______________________________________________

; +21,'BIN',-99999.0,'L:S|D',3,0,-1, -1.0
CONT 31 O o/scg-cal-merged.tse


For analysis- and drift variants, environment parameters $VAR and  $DVAR , see here.

Advanced: For a series of solutions in which the offsets from the international comparison campaigns are varied, see here. 

If you consider estimating instrument-specific offsets, then:
The BoxCar features must be labeled differently, like A... and B..., and a BoxCar BAN0 must be created for the offset at the reference monument. This amounts to hand work.
Create a new urtapm.ins file for this purpose.
The number of columns might exceed 100, which will require a review of the array sizes; some are dimensioned 100.

Two commented examples:
Job 1. The exploring job (where to introduce BoxCars for South versus North orientation)
Job 2. The job with super-campaign-wide solutions for meter-specific orientation offsets

Job 1. urtapm-big-x.ins
11 ^ t/urtap-200907a.cdmp FG5-233   ; take care to import the same dumps as in job 2!
12 ^ t/urtap-200911a.cdmp FG5-220
   ; File comment = meter id is essential
13 ^ t/urtap-201004a.cdmp FG5-220
14 ^ t/urtap-201009a.cdmp FG5-233
15 ^ t/urtap-201006a.cdmp FG5-233
17 ^ t/urtap-201106a.cdmp FG5-233
18 ^ t/urtap-201106b.cdmp FG5-220
19 ^ t/urtap-201106c.cdmp FG5-220
20 ^ t/urtap-201304a.cdmp FG5-233
21 ^ t/urtap-201405a.cdmp FG5-233
22 ^ t/urtap-201405b.cdmp FG5-220
23 ^ t/urtap-201502b.cdmp FG5-220
31 < t/urtap-merged-x.cdmp
09 B t/feature-x.tbl                ; this is the essential result of this job!
SYMBOLS: 9                          ; note that symbols are four characters long!
-W   => -W
BEGIN_HS: 01 02 03 04
01: SCG.
02: A...,SL..
03: A...,SL..,F233,SSCG
04: F233

= ( From 2011 01 01 00 00 00 00 To 2014 07 01 00 00 00 00 ) = O233 : FG5-233  "FG5-233 Instrument offset from 2011"
= ( From 2009 05 01 00 00 00 00 To 2014 07 01 00 00 00 00 ) = F233 : FG5-233  "FG5-233 Instrument offset all campaigns"
/ ( From 2009 05 01 00 00 00 00 To 2015 04 01 00 00 00 00 ) = SSCG            "Residual slope all campaigns"
S d/syn-drift.ts = DRFT : "Drift all time"
R /home/hgs/TD/d/G1_garb_######-1s.mc SCL=1/-774.421
Job 1.a :    orientations-in-merged S -f t/feature-x.tbl -M 220
This job is picky as to the exact column placement of the feature file!
Update urtapm.tse

Job 2.  urtapm-big-O.ins
11 ^ t/urtap-200907a.cdmp FG5-233  ; File comment = meter id is essential
12 ^ t/urtap-200911a.cdmp FG5-220
13 ^ t/urtap-201004a.cdmp FG5-220
14 ^ t/urtap-201009a.cdmp FG5-233
15 ^ t/urtap-201006a.cdmp FG5-233
17 ^ t/urtap-201106a.cdmp FG5-233
18 ^ t/urtap-201106b.cdmp FG5-220
19 ^ t/urtap-201106c.cdmp FG5-220
20 ^ t/urtap-201304a.cdmp FG5-233
21 ^ t/urtap-201405a.cdmp FG5-233
22 ^ t/urtap-201405b.cdmp FG5-220
23 ^ t/urtap-201502b.cdmp FG5-220
31 < t/urtap-merged${VAR}.cdmp      ; setenv VAR -O
09 B t/feature${VAR}.tbl
SCG  => SCG              ; must be introduced; it will collect SCG-data for both meters
AA..,-W   !> AAAA        ; `!´ because of possible overlap of -W and AAS. : poke don't add
AC.. => ACAA             ;
                         ; We are combining all orientations on each platform
AS.. => ASAA
AN.. => ANAA
BEGIN_HS: 01 02 03 04
01: SCG.
02: A...,SL..
03: A...,SL..,F233,SSCG
04: F233

= ( From 2011 01 01 00 00 00 00 To 2014 07 01 00 00 00 00 ) = O233 : FG5-233  "FG5-233 Instrument offset from 2011"
= ( From 2009 05 01 00 00 00 00 To 2014 07 01 00 00 00 00 ) = F233 : FG5-233  "FG5-233 Instrument offset all campaigns"
/ ( From 2009 05 01 00 00 00 00 To 2015 04 01 00 00 00 00 ) = SSCG            "Residual slope all campaigns"
S d/syn-drift.ts = DRFT : "Drift all time"
R /home/hgs/TD/d/G1_garb_######-1s.mc SCL=1/-774.421 ADD=-297.95               ; with this ADD-value the AAAA-parameter becomes 0.0
E urtapm.tse,F233S COL=0                                    = S233            "FG5-233 South" ; use the BoxCars of the previous job
E urtapm.tse,F220S COL=0                                    = S220            "FG5-220 South"

Analysis variants VAR :
This is the status as of early 2017:
ldr  - a late bias, for the skewed distribution of drop residuals in 2016
ldr3 - excludes  t/urtap-201106b$DVAR.cdmp
lsdr - none of the input cdmp's is drift-reduced; instead, a "synth" drift series is sampled and an admittance parameter is estimated
sdr  - like lsdr, but with a bias for the campaign in 2009
ndr  - the ins file issues a comment: "Please setenv VAR -O-expfit-ndr". A bias for the problematic campaign in 2016 is estimated.
In Sep. 2018:
ndr - With this strategy we assume the drift series is fixed (note: the data is a correction).
  What is the preferred application, subtracting it from the AG observations or adding it to the SG?
  Studying the vcv-plots, it turns out that the covariances change, especially those with meter-related offsets;
  the covariance of the scale factor with the offset of FG5-220 is near zero!
  And the scale factor is closer to the value from the GAIN campaign if we do the latter (i.e. adding the drift to the SG).

Drift variants  DVAR :
This is the status as of early 2017
dsyn - synth drift, composed of the bias+slope+decay parameters from the SG standard analysis
dwwf - synth drift, composed of the bias+slope+decay parameters from the SG extended analysis
ochy - synth drift, composed of the bias+slope+decay parameters from the SG extended analysis with ECCO1 and ERAI in the regression
expf - synth drift with exponents, biases and slopes estimated in jobs running expfitbm

Varying the offsets determined in international comparison campaigns

Introduce an environment parameter $OVAR in each of a dedicated urtapm-ins and urtapt-ins file.
Examples are in  urtapm-big-OO-ndr-H-ochy.ins and  urtap-merged-O-ndr.ins 
Copy the offset script varoffsets2env to tmp/ and edit.
Set the necessary environment variable as indicated in the ins-files.
Source the super-script (note: it uses the copy in tmp/):
source  do-offset-variations
The super-script contains a few one-liners to grab the results from the prl-files.
The process involves a tsf-edit file [tmp/]urtapm$VAR.tse that specifies the campaign indexes à la tp_file. The tsf-edit file must be renewed if the campaigns change in length or if ever so small data parts are taken out or introduced. To this end we have a tool that builds the file from the feature table.
To include these offsets in the analysis, urtapm must include a command
A tmp/meters${OVAR}.ts
More  offsets can be appended by setting in the environment
setenv CONT_YOFFS "CONT ?? B tsi-file"
e.g. o/yoffs4prjs.tse
To create a tsi-file with commands for each campaign, issue
awk '/> Campaign/{n++;print "${COMMAND} ${ADD"n":[0.0]} From #"$6," To #"$6+$7-1,"::",$9,$5}' \
 <feature-table> >! tmp/additional_campaign-offs.tsi

# study the output and...


setenv ADD6 20.0
setenv ADD15 20.0
setenv CONT_YOFFS "36 R tmp/additional_campaign-offs.tsi"
set ndata = `tslqn t/urtap-merged$VAR.jd.ts`
set today = `date "+%y%m%d"`

tslist _$ndata -B2009,6,30 -rs5. -Etmp/urtapm$VAR.tse,YOFFS -I -o tmp/meters-ovspec.ts
(motivates another script)

A method to list the time series that urtapt generated from merged condumps

Introducing TP-MC-files

These are the prerequisites for an MC-file to be a TP-MC-file:
MJD in column 1,
data of other kinds pertaining to the same point of time in columns 2...

 tp2tsf ra
creates an mc-file    o/scg-cal-merged.ra.mc  
And since the name tp2tsf is counter-intuitive, there's a symlink  ts2tpmc
tslist o/scg-cal-merged.mc -qqq -LM -LRA -N -n1 -Ff14.7,f10.1
     1.000000  55015.3890046     -25.5
     2.000000  55015.3891204      53.2
     3.000000  55015.3892361     -60.3

where M abbreviates MJD. Additional columns could be added manually (presently the script grants no options):
tslist o/scg-cal-merged.ph01.ts -I -O:PH01 o/scg-cal-merged.ra.mc
and a tsf-file  o/scg-cal-merged.ra.tsf
2009 07 03 09 20 10 55015.3890056     -25.5
2009 07 03 09 20 20 55015.3891214      53.2
2009 07 03 09 20 30 55015.3892371     -60.3

note the format  (i4,5i3,...)

If an ASCII file on unit 18 has been written (t/urtap-merged.lst), it can be used to identify the sections pertaining to a project:
d/AGSG_200907a_FG5-233-5s.mc                                     o/scg-cal-merged.$type.mc -BN0000001 -Un0006381
d/AGSG_200911a_FG5-220-10s.mc                                    o/scg-cal-merged.$type.mc -BN0006382 -Un0014141
d/AGSG_201004a_FG5-220-10s.mc                                    o/scg-cal-merged.$type.mc -BN0014142 -Un0019552
d/AGSG_201006a_FG5-233-10s.mc                                    o/scg-cal-merged.$type.mc -BN0019553 -Un0023349
d/AGSG_201405a_FG5-233-10s.mc                                    o/scg-cal-merged.$type.mc -BN0023350 -Un0048947
d/AGSG_201405b_FG5-220-5s.mc                                     o/scg-cal-merged.$type.mc -BN0048948 -Un0053531
d/AGSG_201502b_11P-FG5-220-10s.mc                                o/scg-cal-merged.$type.mc -BN0053532 -Un0064204
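The -BN/-Un fields in this list can be reduced to plain begin/end record numbers, e.g. for computing the length of each section (a sketch, assuming the line layout shown above):

```shell
# Extract begin/end record numbers and the section length per project.
printf '%s\n' \
'd/AGSG_200907a_FG5-233-5s.mc o/scg-cal-merged.$type.mc -BN0000001 -Un0006381' |
awk '{b = substr($3,4)+0; e = substr($4,4)+0; print $1, b, e, e-b+1}'
```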

Further post-merging file processing

(to reduce platform offsets in  *.pa.mc  or  *.phnn  files)
tp-mc-file -LM -Ldata -TPSs20

Have a look at

   (alt1)   tp2ts <type> -by-campaign > cmd.file
   (alt2)   tp2ts [opt] <type> -by-project <mcfile in merged.lst> ... > cmd.file
   (alt3)   tp2ts [opt] <type> -by-project ALL t/urtap-merged.lst > cmd.file
    then:   source cmd.file
            After  urtapt @ urtap-merged.ins  this script splits the
            super-campaign-wide output into either campaign-wide segments
            or project-wide short sections.
            More precisely: command lines are issued that can be collected
            in a prospective shell-source file. Running them directly could
            cause havoc or simply be overkill; check the commands carefully.
 -collect - Collect project series by TSYS into one tsf file (and one ts-file)
 <type>   - The (usually) two-letter file name part, e.g. ra for raw residual,
            or ph01 for the "hand-selected" mix of predictions 01
            With -by-project, $type is not resolved, so before applying the
            commands you must issue  set type = <type>
            With -by-campaign, $type is resolved. If you want to create a
            type-unspecific set of commands you can supply the quoted string
            '$type' to tp2ts, but must then issue both (tcsh):
            set type = <type>
            set TYPE = <upper-case of type>
 PREREQUISITE:  tp2tsf <type>
            No distinction is made yet for the word 'merged' in the
            file names, nor for 'scg-cal' or the I/O catalog o/.
            Add options as need arises.

            tp2ts -collect ra -by-project ALL t/urtap-merged.lst > ! ra-by-project-coll.csh


Q.: How to get the date of a record at position #n in a file?
A.: tp2date #n [#m ...]          (tp2date is in a/Allcamps)
tp2date 52944 52945
2014 05 31 00 05 30
2014 05 31 00 06 00
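tp2date resolves record positions; for converting a raw MJD value itself (as in the mc listings above) to a calendar date, a generic Bourne-shell helper can serve. This is not part of the urtap suite and assumes GNU date is available:

```shell
# Convert a fractional MJD to a UTC calendar date (MJD epoch 1858-11-17).
# Generic helper, not an urtap tool; requires GNU date.
mjd2date () {
  d=${1%.*}                                            # integer day part
  s=$(awk -v m="$1" 'BEGIN{printf "%d", (m - int(m))*86400 + 0.5}')
  date -u -d "1858-11-17 UTC + $d days + $s seconds" +"%Y %m %d %H %M %S"
}
mjd2date 55015.3890056    # -> 2009 07 03 09 20 10
```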
Q.: How to get the begin and end of a feature in terms of the array index?
A.: No answer yet. urtapt should output the MJD of each feature at the pre-merging stage.
    We have a quick fix:
    list-features urtap-*-mrg.ins >! features-MJD.lst
    but we still need a script that combines
    1. searching for feature F,
    2. reading  o/scg-cal-merged.jd.ts , and
    3. listing the record number of the closest matching MJD.
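Until such a script exists, step 3 can be sketched with generic awk, assuming a one-MJD-per-line listing on stdin (e.g. a tslist dump of o/scg-cal-merged.jd.ts):

```shell
# Record number (1-based) of the MJD closest to a target value.
# Sketch only: expects one MJD per line on stdin; tolerance handling
# is left to the caller.
awk -v target=55015.38912 '
  {
    d = $1 - target; if (d < 0) d = -d
    if (NR == 1 || d < best) { best = d; rec = NR }
  }
  END { print rec }'
```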
Q.: How to generate columns in a t-point file?
A.: New option in tslist: -RTP <MJD-ascii-file>
    See example in  do-merging-job

Q.: How to get a record set belonging to a certain date from a tp-mc-file ?
A.: tslist <tp-mc-file> -qqq -LM -LR -Ft1,f13.5,10.2 -cv`jdc -k3 -d yyyy-mm-dd hh:mm:ss`,#accuracy,1

    tslist o/scg-cal-merged-AUN.ra.mc -qqq -LM -LR -Ft1,f13.7,f10.2 -cv`jdc -k3 -d 2009 07 04 09 32 00`,0.001,1,
    tslist o/scg-cal-merged-AUN.ra.mc -qqq -LM -LR -Ft1,f13.7,f10.2 -cv`jdc -k3 -Dy 2009 07 04 09 32 00`,0.00003,1 -E1:tp.tse,Y

Q.: How to condense the output of a set (or project) DCLevel (or RMS)?
A.: Example for RMS in column 4:
    tslist o/scg-cal-merged$VAR.ra.mc -LM -LR -LD -Etp.tse,D -TPSh1,+F+RMS -Ft1,f12.6,1p,10e12.4 |\
           awk '\!/>/{t=$1;if (p!=$4){print t,$4;print $1,$4;s=$4;print ">"};t=$1;p=$4}'
0.000000 9.7065E+01
0.000000 9.7065E+01
1.281019 4.2204E+01
1.281019 4.2204E+01

    -LM -LR -LD    - import the MJD, RA, and DW (weight) columns
    -Etp.tse,D     - reduce time, shorten gaps
    -TPSh1,+F+RMS  - project-wise RMS (breaks are longer than 1h)
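The awk one-liner above prints a line pair each time the project value changes. The same change-detection idea, as a commented standalone sketch (column positions simplified: time in $1, value in $2, unlike the real tslist output; the tcsh escape \! is also not needed inside single quotes in sh):

```shell
# Print time and value whenever the value column changes -- one line
# per project segment.  Generic sketch, columns simplified.
printf '0.000000 97.065\n0.640 97.065\n1.281019 42.204\n2.0 42.204\n' |
awk '!/>/ {                          # skip separator lines
       if ($2 != prev) print $1, $2  # value changed: new segment starts
       prev = $2
     }'
# -> 0.000000 97.065
#    1.281019 42.204
```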

Q.: How do we avoid rate biases in multi-campaign solutions?
The primary purpose of multi-campaign analysis is the determination of a secular rate of gravity.
The intended method is to subtract the SCG and add back the instrumental drift; however, the drift terms as determined in the adjustment may cancel physical components that take the form of a linear signal. Examples:
1. Nodal tides. This tide, if subtracted from the SCG series, also removes the linear ramp implied by the incomplete cycle. The signal to subtract should comprise all nodal effects, i.e. harmonic degrees 2, 3, 4...
We had a problem when including the degree > 2 tides in the least-squares adjustment: these effects were admitted
at much greater magnitudes, since the implied slope makes a good fit to the SCG drift.
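The ramp hidden in an incomplete cycle can be illustrated with toy numbers: the least-squares slope of a pure 18.61-yr sinusoid observed over only 10 years is distinctly nonzero. A generic awk sketch (illustration only, not urtap code):

```shell
# Toy illustration: a zero-mean 18.61-yr sinusoid sampled over only
# 10 years carries a nonzero least-squares slope -- the linear ramp
# that can leak into the instrumental drift terms.
awk 'BEGIN {
  P = 18.61; T = 10; n = 1000; pi = atan2(0, -1)
  for (i = 0; i < n; i++) {
    t = T*i/n; y = sin(2*pi*t/P)
    st += t; sy += y; stt += t*t; sty += t*y
  }
  slope = (n*sty - st*sy) / (n*stt - st*st)
  printf "slope = %.4f per year\n", slope   # comes out near -0.03/yr
}'
```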
2. Time series of non-tidal effects, which participate in the adjustment. In some cases the admittance coefficient is not known in advance even to one order of magnitude. Pre-filtering should result in a fit dominated by the temporal variations, and Wiener filtering is supposed to improve the fit by attenuating short-period incoherent features. Still, secular terms in these signals can potentially leak into the drift terms.
A.: Case 2:
consider splitting off the secular rates from the effects that are adjusted. Then, by extrapolation of gain spectra, you might obtain a reliable coefficient to account for the secular parts at the urtapm or urtap-merged stage. You'll get ranges for the biases, which would also have to be considered in AG-only analyses. Using sasm06 on the residual, with a candidate effect added back in, would suggest the admittance extrapolated to very long periods. A more comprehensive approach, however, would apply partial cross-spectrum analysis (à la Jenkins and Watts) involving almost ten simultaneous time series - something for the future!
The q_analyse_slopes option in urtap tide analysis provides a budget of the slope-times-admittance contributions implied in the system matrix. In the case of the environmental effects, the true contributions of the slope parts, and thus the drift bias, might be underestimated. However, the bias will not exceed what a total neglect of these effects would incur in AG-only analyses.
As an example,
fgrep '<ASl' ~/Ttide/SCG/logs/urtap-openend-ochy-asl.log 
(But note: this is a version where the nodal tide is not subtracted; the drift parameters are still biased.)
    Case 1:
  cd ~/TD
  rm -f
  run_urtip -t t/nodal-2-3-4.trs -BM d/gnt${begd}-OPNEND-1h.ts d/g${begd}-OPNEND-1h.mc

subtract in urtapt using
49 ^


This script creates
If these files won't change for different $VAR values,  mv  or  cp  them to a $VAR-less name and create symlinks for each $VAR.
tmp/urtapm$VAR.tse  contains a tslist processing command that can be activated with
tsex -x tmp/urtapm$VAR.tse,WMEAN
to print the weighted means of each campaign. Look at this command to make sure the environment parameters are set appropriately:
tsex -t tmp/urtapm$VAR.tse,WMEAN
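For reference, the weighted mean that the WMEAN step prints per campaign is the standard sum(w*y)/sum(w). A generic awk sketch (column layout assumed for illustration, not the actual tse command):

```shell
# Weighted mean of a value column -- the quantity the WMEAN step
# prints per campaign.  Sketch: value in $2, weight in $3 assumed.
printf '1 -25.5 1.0\n2 53.2 2.0\n3 -60.3 1.0\n' |
awk '{ sw += $3; swy += $3*$2 } END { printf "%.4f\n", swy/sw }'
# -> 5.1500
```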


Making plots

There are better ways, but this script is quite useful:
1. Residuals

 plot-campaign-means -h

    (defaults make use of environment parameters)

2. plot-vcvf :
Two examples, showing the -ndr strategy. "-alt" means the drift correction is added to the SG; otherwise it is subtracted from the AG.
Nowadays we use the "-alt" alternative without denoting it.

The command is
  plot-vcvf -zfloor 0.01 -ps urtap-merged-OO-ndr-H-ochy.ps          \
            -size 0.36 -chs 12 -P ~/www/4me/ag-superc/ -X 3.0       \
            -ft t/feature-OO-ndr-H.tbl                              \
            -corr 1,2,3,4,5,84,85,86,87,88 1,2,3,4,5,84,85,86,87,88 \
Residuals:  do-merging-job  will suggest a command:
plot-merged-residual -DC o/scg-cal-merged-O-15.dc.mc o/scg-cal-merged-O-15.ra.mc

2. Predictions:
Predictions are calculated in urtap, and what is to be collected for the plot should be a partial prediction.
The instructions are given to urtapm; the keyword is HS = Hand-selected sets.
First you must create the tp-dated mc-file:
setenv TP2TSF_OFFSET `xtp-platform-dcl`
tp2tsf ph02

where the first line is needed if the drift model was not zero-mean.
plot-merged-predictions -R -A o/scg-cal-merged$VAR.ph02.mc
After every re-analysis with urtapm and urtap-merged, the -R option must be given; otherwise, the tp-tsf-files can be re-used.