Notes on UniVerse Triggers
- Triggers were initially introduced to UniVerse as part of its SQL capabilities.
- Since UV 10.0, triggers have been available for normal UniVerse files. However, this has been done by adapting the SQL triggers to the multi-value environment, which imposes a number of SQL restrictions on their use.
- "Normal" means hashed files. Triggers are not available for directory files (type 1 or 19).
- Not all file actions invoke the trigger. Notably, the CLEAR.FILE command will not invoke the trigger (the file is simply truncated at the O/S level, so the individual records are never processed).
- Curiously, I found that CNAME does not invoke the trigger either, although you would expect this to involve both a delete and a write.
- If you have multi-files, the trigger cannot be applied globally to all the parts of the multi-file. Nor can you use a Q-pointer with a filename like dictname,filepart, as the comma is invalid syntax in SQL.
- To apply triggers to multi-files, create a VOC reference like:
F
path to filepart
path to dict
- For example:
BSTEST2
F
BSTEST\BSTEST2
D_BSTEST
- Then apply the trigger to the new VOC reference.
- The trigger subroutine must be either normally or globally catalogued. Locally catalogued subroutines (as per the normal PICK practice) cannot be used. However, local subroutines may be called by the trigger subroutine.
- If the trigger subroutine changes, you must DROP the trigger from the file, and then CREATE it again.
- To globally (or normally) catalog a program in a PICK flavour account:
- Copy the CATALOG command from the UV account
- Save it into your PICK account, naming it ICATALOG
- Usage: ICATALOG filename catalog-name item-name
- e.g. ICATALOG BP.Q !QUV.TRIGGER QUV.TRIGGER
- This example globally catalogs the subroutine QUV.TRIGGER as !QUV.TRIGGER.
- Note: ICATALOG is included in the VOC of all UniVerse accounts from version 11.0 onwards.
- The syntax to apply a trigger is:
- CREATE TRIGGER triggername {BEFORE | AFTER} event [OR event] ON filename FOR EACH ROW CALLING subroutine ;
- e.g. CREATE TRIGGER BEFORE_UPDATE BEFORE INSERT OR UPDATE OR DELETE ON BSTEST2 FOR EACH ROW CALLING !QUV.TRIGGER ;
- The syntax to drop a trigger is:
- DROP TRIGGER filename {triggername | ALL}
- e.g. DROP TRIGGER BSTEST2 ALL
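- Putting these together, the cycle required when the trigger subroutine itself changes might look like this at TCL. This is a sketch using the file and subroutine names from the earlier examples, not verbatim session output:

```
DROP TRIGGER BSTEST2 ALL
ICATALOG BP.Q !QUV.TRIGGER QUV.TRIGGER
CREATE TRIGGER BEFORE_UPDATE BEFORE INSERT OR UPDATE OR DELETE ON BSTEST2 FOR EACH ROW CALLING !QUV.TRIGGER ;
```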
- The practical way to handle triggers is to create ONE globally catalogued trigger subroutine that handles all trigger requests. This then calls locally catalogued subroutines to carry out the trigger functions for individual files. This may look like:
SUBROUTINE QUV.TRIGGER(triggername, schema, filename, event, timing, newid, newrec, oldid, oldrec, assoc, assoc.event, trigcnt, chain.cascade, cascade)
************************************************************************
* Bp.Q Quv.Trigger - Master subroutine for UV Triggers.
*
* Author : BSS
* Created: 17 Jun 2014
* Updated: 24 Jun 2014
* Version: 0.0.1
*
$INCLUDE Q.INCLUDES QB.COMMON.H
*
* Triggers apply to individual files, so we can't specify a trigger
* to cover all files in a multi-file structure. We have to create a
* file pointer to each of the individual files and then apply the
* trigger to that file pointer. What we want to do here is catch those
* instances and apply a global trigger for all the multi-files.
*
   CONVERT '/\' TO DS:DS IN filename   ;* DS = path delimiter
   IF INDEX(filename, DS, 1) THEN
      dictname = FIELD(filename, DS, DCOUNT(filename, DS) - 1)
      triggersub = dictname:'.TRIGGER'
   END ELSE
      triggersub = filename:'.TRIGGER'
   END
   CALL Q.CATALOGUED(triggersub, iscatalogued)
   IF NOT(iscatalogued) THEN RETURN
   mode = timing:' ':event
   CALL @triggersub(filename, mode, newid, newrec, oldid, oldrec)
   RETURN
*
* ----------------------------------------------------------------------
*
END
- The advantage of doing this is that you do not have to DROP and CREATE triggers every time your trigger logic changes. Nor do you have to mess around with updating the system catalog with revisions to trigger subroutines. The real trigger logic is contained in normal subroutines which can be added/removed/amended at any time (subject to normal considerations of the impact of these changes on the system).
- The "mode" in the above subroutine will be:
- {BEFORE | AFTER} {INSERT | UPDATE | DELETE}
- i.e. this allows six combinations of timing and event.
- A key task of triggers is to disallow file update actions. This is achieved in UniVerse by adding a line like:
- dummy = SetDiagnostics("File is locked. Update not allowed")
- The best documentation for SetDiagnostics appears in the "BASIC SQL Client Interface Guide"
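- Combining the mode values with SetDiagnostics, a file-specific subroutine called by the master trigger might look like the sketch below. The subroutine name BSTEST.TRIGGER follows the filename:'.TRIGGER' convention used by QUV.TRIGGER, and the case bodies are illustrative placeholders, not working logic:

```
SUBROUTINE BSTEST.TRIGGER(filename, mode, newid, newrec, oldid, oldrec)
* Hypothetical per-file trigger logic, called by the QUV.TRIGGER master.
   BEGIN CASE
      CASE mode = 'BEFORE INSERT'
         * e.g. validate newrec before it is written
      CASE mode = 'BEFORE DELETE'
         * Veto the delete: SetDiagnostics causes the trigger to fail
         dummy = SetDiagnostics('Deletes are not allowed on ':filename)
      CASE mode = 'AFTER UPDATE'
         * e.g. write an audit entry comparing oldrec and newrec
   END CASE
   RETURN
END
```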
- Testing the above with a programmed write results in:
Program "TEST": Line 9, File is locked. Update not allowed
Program "TEST": Line 9, Error performing trigger for "BSTEST\BSTEST2".
Write failed: Status: -4
The code used in this write was:
WRITE rec ON bs2, id ON ERROR
   CRT 'Error'
END ELSE
   CRT 'Write failed: Status: ':STATUS()
END
This indicates that the WRITE took the ELSE clause after the trigger failure rather than the ON ERROR clause.
There doesn't appear to be a good way of suppressing the error lines that appear on the screen. You can wrap the WRITE block in HUSH statements, but you would then want to check STATUS() to determine whether any errors occurred.
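The HUSH wrapping might be sketched as follows (the record id and data are illustrative; HUSH ON/OFF suppresses terminal output, so the write's outcome has to be inspected via STATUS() afterwards):

```
id = 'TEST.ID'
rec = 'test data'
werr = 0
OPEN 'BSTEST2' TO bs2 ELSE STOP 'Cannot open BSTEST2'
HUSH ON
WRITE rec ON bs2, id ON ERROR
   werr = -1
END ELSE
   werr = STATUS()   ;* trigger failure takes the ELSE clause
END
HUSH OFF
IF werr THEN CRT 'Write rejected, status = ':werr
```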
I was expecting another error here because the above code did not obtain an exclusive lock before writing the item. However, it turns out that the triggers only enforce locking within TRANSACTIONS. This behaviour is controlled by the ISOMODE configuration parameter (can be modified within the UniVerse Extensible Admin tool). If ISOMODE were set to 2, then all writes and deletes would require exclusive locking of the records.
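Under ISOMODE 2, writes and deletes would therefore need to run inside a transaction while holding an exclusive record lock. A sketch using the UniVerse TRANSACTION START/COMMIT/ABORT form, with the BSTEST2 pointer and an illustrative record id:

```
id = 'TEST.ID'
OPEN 'BSTEST2' TO bs2 ELSE STOP 'Cannot open BSTEST2'
TRANSACTION START ELSE STOP 'Cannot start transaction'
   READU rec FROM bs2, id ELSE rec = ''   ;* take the exclusive lock
   rec<1> = 'updated value'
   WRITE rec ON bs2, id
TRANSACTION COMMIT ELSE
   CRT 'Commit failed - rolling back'
   TRANSACTION ABORT
END
```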