The SCG Monitor

WARNING! The file SCG/monitor-plot.html must not be edited with an HTML editor:
it contains markup that must stay in the lines exactly as it is.
See documentation in the local file ~hgs/Manuals/SCG-monitor.txt on holt.

  1. In case of emergency
  2. Logic and Logistics
  3. The monitor page http://holt.oso.chalmers.se/hgs/SCG/monitor-plot.html
  4. Programs to start on Brimer | Elder | Holt
  5. Testing the agent scripts
  6. Controlled or uncontrolled rebooting of Elder (Utter too, with modifications)
  7. Retrospect - requesting historic monitor plots
  8. cron jobs
  9. memspec-vs-time
  10. Swapping Elder / Utter
For "Elder", alternately read "Utter" in this document.
One of these two machines runs the service jobs for the SCG monitor.
Swapping them must be done carefully.
Which machine is on duty?        Answer: http://holt.oso.chalmers.se/hgs/SCG/broker.name
Are the processes active?        Answer: Look at its desktop. There ought to be at least one xterm window in which services are run.

Check logs:

http://holt.oso.chalmers.se/hgs/SCG/SCG-html-agent-today.log 
http://holt.oso.chalmers.se/hgs/SCG/SCG-html-agent-yesterday.log
http://holt.oso.chalmers.se/hgs/SCG/SCG-html-agent-older.log
http://holt.oso.chalmers.se/hgs/SCG/monitor-plot.log


EMERGENCY - The process on Brimer has stopped, Elder and Holt don't get new data

Most common reason: Programs rolling.pl and udp-brc-file.pl fail to locate the current record in the Predicted Tides.

ACTION:
On Brimer:     Log on via VNC as administrator
On Elder:      User HGS

   cd /home/SGA/bctemp

   kx
   broadcast-G1 &


   cd perlproc/SCG

   delgap # follow the instructions incl.
          # restarting the agents.

   source RESUME

If this does not help, the condition is new and must be investigated.

Without this action, a big gap in Holt's monitor-plot.html will persist for one hour.


The logic / logistics

On Brimer, the G1-file that UIPC is appending to is identified by the file's modification date.
Program `rolling´ takes the last 60 lines and copies them to a temporary file.
The udp agent `udp-brc-file.pl´ watches for the appearance of this file and, when it is ready, augments it with predicted tide data, formats it, and sends it by udp broadcast.
The predicted tides must be computed on Holt in advance.
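The `rolling´ step can be sketched in portable shell. This is a hedged illustration only: the function name and argument layout are invented here, and the real program is rolling.pl.

```shell
# Pick the newest file (by modification date) in a directory -- the G1 file
# UIPC is currently appending to -- and copy its last 60 lines to a
# temporary file, as `rolling' is described to do above.
roll_last_60() {
    dir=$1; out=$2
    latest=$(ls -t "$dir" | head -n 1)   # newest file = most recently modified
    tail -n 60 "$dir/$latest" > "$out"
}
```

On Brimer this would amount to something like roll_last_60 /home/SGA/TDYEAR/RAW_o054 tmp.dat, with tmp.dat then picked up by udp-brc-file.pl.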

On Holt two html files are produced and filled with information, www/SCG/monitor.html (text) and ...monitor-plot.html (graphics).
Two agents are engaged, the udp broadcast server `SCG-file-agent.pl´ and the html producer `file-html-agent.pl´.
Holt's file agent produces a one-hour-long (actually 3600 lines long) file that is used by two processes:
Plotting the data (execution started within the file-html-agent) and adding USGS earthquake notifications (program `eqwatch.pl´).

On Elder, a second version of SCG-file-agent is running, producing a work file of indefinite length and a most-recent-3600 lines work file.
The long file is used to produce a plot of predicted tides and air pressure (one sample per hour, program `tide-press-plot´, csh) and to compute a power spectrum once every six minutes (program `11-mems´, csh). (The name SCG-file-agent should be changed to avoid confusion; SCG-accu-agent, for instance.)
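The one-sample-per-hour reduction feeding the tide/pressure plot can be sketched as below. A hedged sketch: it assumes the work file holds one line per second (3600 lines per hour, as quoted above); the actual selection in tide-press-plot may differ.

```shell
# Keep every 3600th line of a one-line-per-second file, i.e. one sample
# per hour, starting with the first line.
hourly_samples() {
    awk 'NR % 3600 == 1' "$1"
}
```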

Brimer needs predicted tides; they are used in the UDP broadcast. The calculation is done on Holt.

One weakness: UDP streams are somewhat unreliable. Brimer therefore has to store the whole set of 60-line files as a back-up; upon data gaps, Holt and Elder issue scp commands to fill them. (This feature is actually installed and has been rather successful.)
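The gap-detection half of that idea can be sketched as follows. Hedged: it assumes a work file with an epoch-second timestamp in column 1, one sample per second; the actual mending is done by the agents via scp.

```shell
# List the missing seconds in a one-sample-per-second monitor file whose
# first column is an epoch-second timestamp (assumed layout). Each printed
# second is a sample a backfill scp request would need to fetch.
find_gaps() {
    awk 'NR > 1 { for (t = prev + 1; t < $1; t++) print t }
         { prev = $1 }' "$1"
}
```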
 

The monitor page  http://holt.oso.chalmers.se/hgs/SCG/monitor-plot.html

Here is a snapshot from the monitor page, the end product of the SCG Monitor. If the current page looks conspicuously different, identify which of the programs
responsible for the contributions has been failing.

monitor-plot is a csh script run under  SCG-html-agent.pl (/home/hgs/perlproc/).
eqwatch.pl is running indefinitely after  cd ~/perlproc; source START
tide-press-plot  is activated with cron (currently two times per day). Correction: the program that produces the plot is ~/TD/new-tpplot
11-mems is running indefinitely on Elder or Utter ; graphics are uploaded to Holt.

At the top of the page there are links to pages with content renewed routinely, e.g.
>Room Temps (top row, item 3) - croned jobs /home/hgs/wx/TEMP/get-monutemps  and  /home/hgs/wx/TEMP/plot/plot-monutemps
In the meantime more links have been added.
In the caption to Power spectrum,
>spectrograms vs time - results from a croned job /home/hgs/TD/actual-memsp-vs-time
There are more links on this page now than in 2013.

The Form part (Submit Query) involves a javascript / perlscript feature that activates programs in /home/hgs/let


Brimer:
The desktop with MS-Windows applications is usually blocked by GWR's login window. For security reasons we don't explain here what to do.
The following manoeuvres concern the Cygwin subsystem, on which the processes run that take care of the data broadcast (UDP).

Stop programs, e.g. before reboot:
Find the xterm window that presents a prompt below a line that had launched `broadcast-G1 &´. Let's call this the launching window.
(Two other xterm windows should be visible; they don't allow input but show live messages from the UDP broadcast procedures: monitor windows.
A reason for the reboot might be that the UDP broadcast process has failed, so it is a bit difficult to anticipate exactly what you will see.)
Enter:
. KX
The xterm windows associated with the broadcast monitor should now disappear.
At least, their content should stall; a warning might appear or a concluding message.
However, KX might not lead to anything sensible.
Then try to analyse the current situation with processes involving perl, xterm and bash:
ps | fgrep xterm
ps | fgrep bash
ps | fgrep perl
If this results in nothing, go on with reboot.
If `ps | fgrep perl´ shows active jobs with active xterms associated with them, try to kill the perl jobs with
 touch STOP STOPSEND
and check again with `ps | fgrep perl´ that they have disappeared.
The processes not attached to the terminal devices (ptyn) of the console and the launching window include the monitor windows we want to remove and their associated processes.
If the listing from ps doesn't show any, go ahead with the reboot. Otherwise, apply the command
kill -KILL pid
starting with the PIDs of perl, then bash, and finally xterm. If the xterms are stuck, never mind; go ahead with the reboot.
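That kill order can be wrapped in a small helper. A sketch only: it assumes `pgrep´ is available (plain Cygwin may force you back to `ps | fgrep´ and manual PIDs), and the function name is invented here. Note that including bash in the list would also kill the shell you issue the command from, so do that only from an xterm you can afford to lose.

```shell
# Kill -KILL all processes whose command name exactly matches each
# argument, in the order the arguments are given (perl, then bash,
# then xterm in the procedure above).
kill_by_name() {
    for name in "$@"; do
        pids=$(pgrep -x "$name" 2>/dev/null) || true
        if [ -n "$pids" ]; then
            kill -KILL $pids 2>/dev/null || true
        fi
    done
}
```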

Reboot: Stop the GWR-UIPC and reboot.


Start programs e.g. after reboot:
From a remote computer: connect with VNC. Or work hands-on with screen and keyboard. 
GWR-UIPC
should come up automatically. There are warnings (red-coloured circles and squares) to be cleared in the Main tab;
move the pointer to the red items and click the left mouse button.
 
Do you see a Cygwin console window (black background)?
If not, start a Cygwin Terminal (desktop icon, double-click) and from within there, start Cygwin-X by
  . sx
Start an xterm terminal from the cygwin console if one didn't come up due to sx:
  . st
That'll be your launch window. Go there.
Programs are in  /home/SGA/perlproc :  rolling.pl   and  udp-brc-file.pl . Here's how to start them:
Start two xterms: xterm & and in each
  tcsh
  source minipath
  setenv TODAYDEST holt
Multiple recipients for the daily dataset can be specified, e.g. holt:barre
In the first xterm
  cd /home/SGA/perlproc
  source start-rolling
In the other
  cd /home/SGA/perlproc
  source start-broadcast
The predicted tides are stored in ~/perlproc/PT/


An example of the xterm windows involved in UDP broadcast

Taking down the procedures:
In the xterm window where you started the procedures, issue
cd /home/SGA/perlproc
if necessary. The following command will kill the persistent xterms by process number:

   . KX

To stop the routines individually:

     touch STOP
     touch STOPSEND



Elder:
Programs are in  /home/HGS/perlproc
The following programs on Elder are vital for the continued function of the Monitor and its associated pages on Holt:
SCG-accu-agent.pl         - Runs the UDP client and computes the tide residual, similar to SCG-html-agent on Holt.
file-monitor-agent.pl     - Maintains the integrity of the monitor data, helps Holt to mend gaps. 
tide-press-plot           - Prepares the 30-days' tides and pressure plot (in the upper-right of the monitor)
11-mems                   - Prepares the MEM-spectra for the last hour (plot in the lower-right of the monitor)
Scripts for scp are in ~/bin:
scp2h                     - scp to Holt
scp2ba                    - scp to Barre
scp2LIST                  - a whole range of recipients
The script actually invoked is set through the environment, e.g.
setenv SCPPGM /home/HGS/bin/scp2h
which is also the default.

Controlled and uncontrolled reboot of Elder or Utter
Controlled from remote VNC:
Applies to Elder and Utter.
Be careful to stop all Cygwin and Windows processes that could hold up the rebooting.
Otherwise the computer will stop at a prompt asking whether the operator agrees to stop them.
Uncontrolled:
You will find Elder in the idle state. Log in (user HGS) and start Cygwin. Open an xterm (and a second one if you like), use tcsh and connect to Holt (the command is simply  holt )
Elder console: . sx
Elder console: . st
Elder 3 times: xterm &
in each xterm:
tcsh
Elder   xterm: source minipath
Elder   xterm: holt
Holt    xterm: tcsh
Holt    xterm...
ps -uhgs
You will probably find orphan processes.
SCG-html-agent - /home/hgs/perlproc/SCG-html-agent.pl - runs the UDP-client
retro-agent    - /home/hgs/let/retro-agent.pl         - Serves user requests for plotting in the past.
eqwatch        - /home/hgs/perlproc/eqwatch.pl        - checks mail inbox for earthquakes
Try to kill the first two using
STOP RCV RET
and kill eqwatch more efficiently using   kill -KILL <pid>

Controlled reboot of Elder, e.g. for Windows updates.
(Note: the OS is by now too old to receive Microsoft updates.)

Note that Elder is not only used to run some data handling procedures but also
to connect to Holt and run the live monitor functions remotely. This is because
we don't have a Holt console, and because we need a machine that is always active.
Like Elder.

However, from time to time, Elder needs rebooting.
Do the required Windows work first, like installing updates. Then...
 cd ~/perlproc
 
STOP RFMT
Reboot.
Start cygwin, and from the console window start at least two xterms
 . sx
# starts X and an xterm
 . st
# starts an additional xterm. Repeat as often as desired. Then you can hide the cygwin console
The following assumes that you run tcsh, so issue in each xterm
 tcsh
 source minipath
# this is necessary to speed up program location; the paths are extended to the software
# you need below. So this command is important!

Use one of the xterms to connect to Holt
 holt
It will open another xterm window. Start tcsh there (there is no minipath since it's a linux machine)
In the window connected to holt (frame title = hgs@holt) issue
  tcsh
  j
and note the active jobs
 cd ~/perlproc
 STOP EQW RET
In the window connected to Elder (frame title = HGS@HGS-Dator) issue
 cd ~/perlproc
 j
and note the active jobs and their job numbers
 STOP RFMT
any job (job number n in the j listing) that does not stop in reasonable time can be stopped by
 ^C
to interrupt STOP, and then
 kill %n
Do the rebooting. Start the jobs again at Elder and at Holt.
Start cygwin again. Open a couple of xterm windows with
 . sx
 . st
 . st
at the console window.
Use one of the xterms to connect to Holt. The command is simply
 holt
and start tcsh in the new xterm that will appear.
 tcsh
 cd perlproc
Done. Continue with Elder as follows
Starting the jobs at Elder
Connect to elder e.g. by ssh or
at the very machine, start cygwin and issue
 . sx
If we copy to machines other than Holt, set the recipients through the script /home/HGS/bin/scp2LIST
 tcsh
 source minipath
 setenv SCPPGM /home/HGS/bin/scp2LIST
 echo $path

check that path contains  ( . ~/perlproc ... ), add if necessary
 cd perlproc/SCG
Try
       sync-monitor

If that does not work,
  prune-monitor -T         # will preserve the last two hours of monitor.txt
# prune-monitor -T -b h-n  # will preserve the last n hours
In each its own xterm, start the eternally running programs
(this was once accomplished with source START, but now you have to do it in separate xterms)
source start-SCG-accu-agent.csh

source start-file-monitor-agent.csh

source start-11-mems.csh
    The programs may be started in different xterms. (tide-press-plot isn't a top figure; it should be replaced
    with a better plot procedure. Anyway, it produces a new plot only once per hour and thus may share
    an xterm with e.g. 11-mems.)

    The ps command in Cygwin does not list the names of procedures; thus, the START and STOP procedures
    cannot easily check whether the programs are already or still active. The jobs command (shorthand: j ) returns the script calls
    only if they have been issued in the same xterm as the jobs command.
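A common workaround for this limitation is to record the PID at start and test it before starting again. A hedged sketch, not something the existing START/STOP scripts do; the /tmp pid-file location and the function name are assumptions of this illustration.

```shell
# Start a command only if no instance recorded in its pid file is still
# alive. /tmp is an illustrative location for the pid files.
start_once() {
    name=$1; shift
    pidfile=/tmp/$name.pid
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "$name already running"
        return 1
    fi
    "$@" &                       # launch in the background
    echo $! > "$pidfile"         # remember its PID
}
```

Usage would be e.g. start_once accu-agent source start-SCG-accu-agent.csh, issued from a tcsh-free sh context.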

Testing:
For testing the agent processes without copying files to the http-server machine, issue
setenv SCPPGM noscp
before  source START


Swapping Elder / Utter:
Assume Utter is active, Elder is to take over.

On Elder:
Follow the advice as for "Starting the jobs". Before source start-* , prune the monitor data or reload the other agent's monitor files
source minipath
cd perlproc/SCG
prune-monitor -M -b h-1          # prunes monitor, preserves the last hour
prune-monitor -T                 # prunes monitor.txt, preserves the last two hours
prune-monitor -b h-1 -x .dat     # prunes monitor.dat, preserves the last hour
and/or, to synchronize the monitor files on Elder with the more recent ones on Utter,
cd perlproc/SCG
sync-monitor
The latter method is for a smooth take-over while Utter keeps running.
The former method is to be preferred if the active agent (Utter) has been stopped a long time ago (more than an hour for certain, more than 10 minutes perhaps).
Holt will fetch monitor.txt if its own has gaps. Now it's time for opening three xterms; issue tcsh, source minipath, and cd perlproc/SCG
and one of the following in each one of the xterms:
source start-SCG-accu-agent.csh

source start-file-monitor-agent.csh

source start-11-mems.csh
After issuing the start commands, wait at least ten minutes while you observe that the processes keep running. The buffer files need some new data, and a gap near the restart won't be filled before some time has passed. 11-mems needs 60 minutes before all the curves can be drawn, but ten minutes suffice for drawing the first. 
Check these files: ls -l perlproc/SCG/monitor.*
A typical result:
-rw-r--r-- 1 HGS None    29364 Apr 23 12:54 monitor.ts
-rw-rw-r-- 1 HGS None   409920 Apr 23 12:54 monitor.txt
-rw-rw-r-- 1 HGS None   416640 Apr 23 12:54 monitor.txt.tmp
-rw-rw-r-- 1 HGS None     6720 Apr 23 12:54 monitor.wrk
-rw-rw-r-- 1 HGS None 19763520 Apr 23 12:18 monitor.dat
-rw-r--r-- 1 HGS None        0 Apr 14 16:04 monitor.tmp       # dates from the last restart
etc.; older files of no particular interest.

Then, log on to Holt as user hgs and
set-broker elder
Now you can stop the processes on Utter
source minipath
cd perlproc/SCG
STOP RFMT

The reverse swap works the same way, with Elder <-> Utter exchanged.

------------------ End Swapping Elder / Utter ------------------------

Holt:
Programs are in /home/hgs/perlproc , /home/hgs/www/SCG/ and /home/hgs/let/
There are in all five programs running on Holt but only two to start by hand, eqwatch (earthquake watch) and retro-agent (and possibly a gap-filling instance of tide-press-plot and pt4brimer).
SCG-html-agent  - /home/hgs/perlproc/SCG-html-agent.pl - runs the UDP-client, automatically restarted
                                                         from cron /home/hgs/perlproc/monitor-cronstart
                                                         every 30 minutes.
tide-press-plot - /home/hgs/TD/new-tpplot P -T RADAR   - normally a cron job, but after a long break it
                                                         should be executed from the command window with
                                                         a reprocessing option. Read the instructions:
                                                         cd /home/hgs/TD; /home/hgs/TD/new-tpplot -h

pt4brimer       - /home/hgs/TD/pt4brimer               - computes predicted tides; a cron job.

eqwatch         - /home/hgs/perlproc/eqwatch.pl        - checks mail inbox for earthquakes
retro-agent     - /home/hgs/let/retro-agent.pl         - Serves user requests for plotting in the past.
It's best to start  eqwatch and retro-agent from a stationary machine like Elder in an ssh-xterm window to Holt.

Combined start:
starts anything that happens to be inactive
cd ~/perlproc
source START
Combined stop:
STOP RETRO E[QWATCH]
    The retro-agent is not involved in the real-time processing.

   [The procedures should all be moved into a new subdir SCG-Monitor (incl. PNG). Beware the hard-wired file names!]

    SCG-html-agent forks out shell processes for preparing the plots, appending to the data bases, ...
    The central plot with the last 1-hour is produced by csh script ~/www/SCG/monitor-plot .
   
Predicted tides for Brimer  (a cron job; should be executed from the command window after a long break)
    Example:
cd ~/TD
foreach i ( `fromto 0 31` )

   # prtide4day -XD -D 2013,01,02 -A $i # with drift term
   prtide4day -X -D 2013,01,02 -A $i
end
./scp2b -N 32
   The cron job every third day of a month:
/home/hgs/TD/pt4brimer

Retrospect page

The cgi functions for the retrospect service (show earlier data) are in SCG-monitor.cgi (-try is the engineering version).
    They need an agent called retro-agent.pl. You can go back to the day before today or earlier (since the G1-files
    are produced early in the morning of the day following their completion; cf. cron jobs).
    The dynamic page is copied from the static page (www/SCG/monitor-plot.html) except that a few features are
    replaced (plot file names); most importantly, the date for the plot request is sent to let/log .
    retro-agent.pl processes the content of log, but only if the plot file $date-monitor.png does not already exist.
    The usual content of the file ~/let/log is the command
     /home/hgs/TD/G1-plot-hour [options] -M YYYY-MM-DD HH [mm]
 cd ~/let
 retro-agent.pl &

# to stop: touch STOPRETRO
# to test: touch TESTRETRO   and perhaps   touch ONETRIPRETRO
# While STOPRETRO is removed automatically when retro-agent.pl is started,
# the other signal files must be removed manually.
   ~/let is a directory where SCG-monitor.cgi can place files. The rule with cgi scripts is (or seems to be) that
   shelling out (`sh commands`;) is not allowed (it is not quite that simple). An easy solution to starting jobs on
   request from a cgi script is to have an agent running. The agent, however, cannot change the file permissions;
   the owner is www-data, and so is the group. The cgi script could check whether the agent has fulfilled the
   request and delete the content of the request file; however, it would depend on the surfer's human behaviour
   whether he/she follows up a request. It appears more robust if the agent itself can decide whether a request
   is still unanswered.
    In let/retro-agent.pl the request string is analysed and the files that are going to be produced are anticipated. If these files exist already, the agent will skip the request. The disadvantage of this method is that the new file products must be named uniquely, which will become a problem if requests with many sub-options are allowed. An easy way out would be a cleaner that removes files with names matching a certain pattern, applying an expiry date. For instance, plots for a particular minute could be purged every day.
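The skip-if-product-exists decision can be sketched like this. The PNG directory and the $date-monitor.png naming follow the text above; the function name and the echoed messages are illustrative, not what retro-agent.pl actually prints.

```shell
# Decide whether a retro request for a given date still needs serving:
# if the anticipated plot file already exists, skip the request.
handle_retro_request() {
    date=$1; pngdir=$2
    target=$pngdir/$date-monitor.png
    if [ -e "$target" ]; then
        echo "skip: $target exists"
    else
        echo "run: would invoke G1-plot-hour for $date"
    fi
}
```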
set today=`date '+%Y-%m-%d'`
set yester=`jdc -- -A-1 $today`
ls -l /home/hgs/www/SCG/PNG/????-??-??-??-??-monitor.png | awk -v y=$yester '($8 == y){print $9}' | xargs rm -f
This could become a croned script to run every morning.
It could have an option to loop through a series of back-days.
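Such a croned cleaner could look like the sketch below. Hedged: it replaces the awk date comparison above with find's -mtime test (which covers a whole series of back-days in one call), and the function name is invented here.

```shell
# Remove minute-resolution retro plots older than a given number of days.
# The name pattern matches the PNG example above.
purge_old_retro_plots() {
    dir=$1; days=$2
    find "$dir" -name '????-??-??-??-??-monitor.png' -mtime +"$days" \
        -exec rm -f {} +
}
```

A crontab entry in the style of the lists below might run purge_old_retro_plots /home/hgs/www/SCG/PNG 1 every morning.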


cron jobs on Elder
# m h  dom mon dow   command
30 03 * * * /home/HGS/Seisdata/gcf-tar -X -b 1           > /home/HGS/Seisdata/gcf-tar.log 2>&1
00 08 2 * * cd /home/HGS/Tidedata/; ./cp-from-sg -X prev > /home/HGS/Tidedata/cp-from-sg.log 2>&1 

cron jobs on Holt
  (status 2016-04-23, complete documentation to be created)
# m h  dom mon dow   command
30 01 * * * /home/hgs/wx/ARC/get-oso-weather            > /home/hgs/wx/ARC/get-oso-weather.log 2>&1
55 01 * * * /home/hgs/wx/MAREO/get-oso-mareograph       > /home/hgs/wx/MAREO/get-oso-mareograph.log 2>&1
50 06 * * * /home/hgs/wx/WELL/get-well-level            > /home/hgs/wx/WELL/get-well-level.log 2>&1
50 14 * * * /home/hgs/wx/WELL/get-well-level            > /home/hgs/wx/WELL/get-well-level.log 2>&1
#56 01 * * * /home/hgs/wx/BUBBLER/get-oso-bubbler -d 1h -S 2 > /home/hgs/wx/BUBBLER/get-oso-bubbler.log 2>&1
40  * * * * /home/hgs/wx/TEMP/get-monutemps             > /home/hgs/wx/TEMP/get-monutemps.log 2>&1
41  * * * * /home/hgs/wx/TEMP/plot/plot-monutemps       > /home/hgs/wx/TEMP/plot/plot-monutemps.log 2>&1
45 07 * * * /home/hgs/SMHI/getsealevel -g r3g.txt      >> /home/hgs/SMHI/crontouch 2>&1
45 19 * * * /home/hgs/SMHI/getsealevel -g r3g.txt      >> /home/hgs/SMHI/crontouch 2>&1
25 07 * * * /home/hgs/TD/get-tide-data                  > /home/hgs/TD/logs/get-tide.log 2>&1
50 07 * * * /home/hgs/TD/tide-press-plot -DD -U         > /home/hgs/TD/logs/tide-press-plot.log 2>&1
40 07 * * * /home/hgs/TD/yesterdays-G1-plot             > /home/hgs/TD/logs/yesterdays-G1-plot.log  2>&1
05 15 * * * /home/hgs/TD/tide-press-plot -DD -UF        > /home/hgs/TD/logs/tide-press-plot.log 2>&1
07 15 * * * /home/hgs/TD/actual-memsp-vs-time           > /home/hgs/TD/logs/actual-memsp-vs-time.log 2>&1
45 23 * * * /home/hgs/TD/actual-memsp-vs-time           > /home/hgs/TD/logs/actual-memsp-vs-time.log 2>&1
30 06 1 * * /home/hgs/TD/GGP-send-data -F               > /home/hgs/TD/logs/GGP-send-data.log 2>&1
00 21 3 * * /home/hgs/TD/pt4brimer                      > /home/hgs/TD/logs/pt4brimer.log 2>&1
00 22 * * 5 /home/hgs/TD/get-atmacs -w                  > /home/hgs/TD/logs/get-atmacs-w.log 2>&1
15 22 * * * /home/hgs/SMHI/auto-tgg                     > /home/hgs/SMHI/auto-tgg.log 2>&1
00 08 2 * * /home/hgs/TD/monthly-600s -NP R             > /home/hgs/TD/logs/monthly-600s.log  2>&1
10 03 * * * /home/hgs/TD/tilt-control-monitor -pwr -PZ  > /home/hgs/TD/logs/tilt-control-monitor.log 2>&1
12 03 * * * /home/hgs/TD/tilt-control-monitor -bal -PZ >> /home/hgs/TD/logs/tilt-control-monitor.log 2>&1
10 08 * * * /home/hgs/TD/longt-gbres                    > /home/hgs/TD/logs/longt-gbres.log 2>&1
10 22 * * 5 /home/hgs/TD/ts4openend GBR-PMR-ATR-TG-TAXP > /home/hgs/TD/logs/ts4openend.log 2>&1
00 07 * * * /home/hgs/wx/WELL/plot-rr2wl-lastmonth      > /home/hgs/wx/WELL/plot-rr2wl-lastmonth.log 2>&1
35 08 * * * /home/hgs/SMHI/plot-tgg                     > /home/hgs/SMHI/plot-tgg.log 2>&1
58 23 * * * /home/hgs/Seism/USGS/ens-daily-upd          > /home/hgs/TD/logs/ens-daily-upd.log 2>&1
58 11 * * * /home/hgs/Seism/USGS/ens-daily-upd          > /home/hgs/TD/logs/ens-daily-upd.log 2>&1
#00 08 * * * /home/hgs/TD/daily-fo-pdg.sh -r -25/55/5   -e Coquimbo -A     > /home/hgs/TD/logs/daily-fo-pdg.log 2>&1
10 08 * * * /home/hgs/TD/daily-fo-pdg.sh -r AUTO -floor -25 -d -30 -FW -e last-30-days -A -remember > /home/hgs/TD/logs/daily-rolling-fo-pdg.log 2>&1
24  * * * * /home/hgs/perlproc/monitor-cronstart        > /home/hgs/TD/logs/monitor-cronstart.log 2>&1
54  * * * * /home/hgs/perlproc/monitor-cronstart        > /home/hgs/TD/logs/monitor-cronstart.log 2>&1

  /home/hgs/wx/MAREO/get-oso-mareograph includes data from the Bubbler mareograph.

  get-tide-data          - copies from Brimer;
  yesterdays-G1-plot     - see below;
  actual-memsp-vs-time   - a special up-to-now version of yesterdays-G1-plot's memsp-vs-time;
  tide-press-plot        - prepares the upper-right diagram on the monitor home page;
  daily-fo-pdg.sh        - prepares free-oscillation spectrograms; can be disbanded if no big earthquake
                           has happened during the recent 60 days. Event names and times must be defined
                           inside the script.
   
    Procedure  yesterdays-G1-plot does a lot:
  1. ~/TD/ :: daily-resid for yesterday => d/G1_garb_yymmdd-1s.mc
  2. ~/TD/ :: hms-mc-plot for yesterday  =>  ~/www/SCG/PNG/${yesterday}-hms-plot.png
  3. updates home pages in regard of 2.
  4. ~/TD/ :: eqlookup.pl
  5. ~/TD/MEMS/ :: memsp-vs-time

MEM spectra versus time
Normally, the cron jobs   yesterdays-G1-plot  and    actual-memsp-vs-time  update the memsp-vs-time plots and catalogs.
The numerical data is kept in  ~/TD/MEMS  and the png plots are sent to  ~/www/SCG/MEMS
in ~/TD/MEMS
  $RJD:$H.msp
  $YYYY-$MM-$DD-memsp-vs-time.grd
  memsp-vs-time.grd
  memsp-vs-time.ps
in ~/www/SCG/MEMS
  2012-08-01-memsp-vs-time-tn.png 
  2012-09-01-memsp-vs-time.png
  etc.

Programs:
  ~/TD/daily-mems
  ~/TD/MEMS/memsp-vs-time
To repair missing bits, not including today (happens indeed rarely):
  cd ~/TD
  daily-mems -M MEMS [-n #ndays] #date
  daily-mems -R MEMS [-n #ndays] #date
  cd MEMS
  memsp-vs-time -D -OT ~/www/SCG/MEMS -d #YYYY-MM-01 31
or
  memsp-vs-time -S -D -OT ~/www/SCG/MEMS -d #YYYY-MM-01 31
if the files of today haven't been sent from Elder to Holt
or
  memsp-vs-time -D -OT ~/www/SCG/MEMS -A #RJD -d #YYYY-MM-01 31
if the files of the day (RJD) are not found on Elder but on Holt
Repair missing bits of today, if   memsp-vs-time -S  did not close all gaps: Handwork!
Example is for 2012-12-30. Note that we copy an incomplete G1-file from Brimer.
   cd ~/TD
   scpb -R G1121230.054
   daily-resid -B F -T garb RAW_o054/G1121230.054
Check whether erroneous files are in the way:
   ls MEMS/`jdc -di 2012-12-30`*.msp
# remove what's necessary
Either:
   daily-mems -R MEMS -n 2 2012-12-30 # for yesterday and today
or:
   daily-mems -R MEMS -n 1 2012-12-31 # for today
then recommended:
   actual-memsp-vs-time

or:
   cd MEMS
   memsp-vs-time -D -OT ~/www/SCG/MEMS -A 56291 -d 2012-12-01 31

Finally:
   cd
~/TD
   rm -f o/G1_garb_121230-1s.mc


Brimer: Problem with running jobs in xterm windows


Figure above: This is the normal look of the broadcast processes. The heartbeat marks are symbols, possibly coloured, like (/); they change angle every minute. Samples of the broadcast record are also shown. If the windows appear dead or incomplete, follow the guidelines below.


1. Take down all xterm windows. 
Use the console window, issue
jobs
and kill the XWin job, usually
kill %1
This will remove the windows. Exit the console window too.

2. Start Windows Task Manager, kill any of the following processes:
xterm, bash, tcsh, perl

3. Start Cygwin console window and three xterms
. sx
^Z           # means: <Ctrl>-Z 
bg %1
. st
. st
. st
4. Start the broadcast processes manually
In the first of the xterms, enter
tcsh
cd bctemp
/home/SGA/bin/rolling -q -M monit-f.txt -d /home/SGA/TDYEAR/RAW_o054 -P PT -a &
In the second xterm
tcsh
cd bctemp
/home/SGA/bin/udp-brc-file -q -M monit-b.txt tmp.dat &
 
