Wednesday, 6 August 2025

Some Priority internals (SCREENMSG)

I had developed a form for a client and I wanted to document it. There are reports that purport to do this but they don't always give all the information, and my improved report doesn't exist on the client's computer. So I used the WINDBI Dump capability; one can dump a form and one can dump a form's cursor; the difference between the two is that the latter only gives the SQL query that populates the form whereas the former also gives triggers.

Looking at some of this information, my eye was caught by this statement:
SELECT 192 INTO :SCREENMSG
FROM DUMMY WHERE :RETVAL = 0;

I wondered what this could mean. Could it be that :SCREENMSG is a special variable in the same way that :ERRMSG is? Storing a number in :ERRMSG will cause the corresponding error message (from the procedure) to be displayed. So presumably the above statement will cause message 192 to be displayed on the screen. But where is 192 coming from? There were similar statements but with different numbers.

Maybe, I thought, there is a table called SCREENMSG or SCREENMSGS from which the value is automagically retrieved. And so it is: the table is SCREENMSG, and value 192 is (translated into English) 'Value does not exist in the table'. I looked at all the values in the table and have to admit that I could not imagine any personal use for them.
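Out of curiosity, the table can be queried directly in WINDBI. A minimal sketch, assuming (as is the usual Priority convention) that the key column shares the table's name:

SELECT * FROM SCREENMSG WHERE SCREENMSG = 192 FORMAT;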

At least one more internal has been explained.


Monday, 4 August 2025

Strange report bug 2

Several years ago, I wrote [1] about a bug in a report: although there was a join to a specific table (STACK2), the report behaved as if this join were non-existent. When I dumped the report in WINDBI, indeed the join to STACK2 had disappeared. The fix was to add to the report another field from STACK2 (there are only two fields in this table; STACK2.ELEMENT was used to join with something, so obviously STACK2.TYPE had to be added).

Over the past few days, I've been helping another programmer write a somewhat complicated report that is supposed to show open invoices and payments for suppliers at a given date, where the prices have been converted to dollars. One of the complications of this report is that it has to use two instances of CURREGITEMS: one fixed to dollars (currency -2), with its date set to the parametric conversion date, and the other joined to the account's currency, which allows a currency other than NIS or $ to be converted.

Although the join ACCOUNTS.CURRENCY = CURREGITEMS1.CURRENCY appears in the report, the report behaves as if this join does not exist when the report is run. Again, dumping the report shows that this join does not exist. I partially solved the problem by having only ACCOUNTS.CURRENCY appear as a report field, then adding = CURREGITEMS1.CURRENCY as its expression (i.e. in a son form of the report field). 

Why do I write 'partially'? Because when ACCOUNTS.CURRENCY is -1, i.e. NIS, there is no corresponding record in CURREGITEMS. Indeed, originally the join was a left join to solve the shekel problem. But how could I fake a left join when I'm using CURREGITEMS as an expression? My off-the-cuff solution was to use the following expression
= (ACCOUNTS.CURRENCY = -1 ?
ACCOUNTS.CURRENCY : CURREGITEMS1.CURRENCY)
but I'm not totally convinced that this is the correct solution. Indeed, the programmer that I was helping informed me that it worked, but that she had to add the 'distinct' flag to the report.

The original problem (see first paragraph) was solved by adding another field of STACK2. Here, other fields of CURREGITEMS1 are being used, but they're not being displayed. There is a field in CURREGITEMS called RREXCHANGE that appears to be zero all the time, so this field could be displayed but have the 'don't display if zero' option set.

Internal links
[1] 37

Thursday, 17 July 2025

Determining if the output of an HTML document is a printer, the screen or email

Due to legal requirements, delivery notes and invoices have a field 'Printed' that determines whether the word 'original' or 'copy' should appear in the output when the document is printed. Displaying the document on the screen always results in 'copy'. A client wanted a similar mechanism but for customer orders - they want to know if the order has been printed at any stage.

In an HTML document, the first three stages are usually INPUT, HTMLCURSOR and SQLI. In that third stage, the code for determining which text should be added ('original'/'copy') can be seen, but there is nothing in the document that says whether it is being printed or displayed. Looking through the SDK, there's nothing explicit on this topic, although there is a hint.

Saving a Certified Copy when Printing a Document

You can define a procedure so that, when the document is printed or sent by e-mail (:SENDOPTION = 'PRINT' or 'AMAIL'), a certified copy is saved that can be printed later (using the Create Certified Copy program).

In order to save a copy:

  • Include an HTMLEXTFILES step which goes over the attached files.
  • In the first INPUT step, set the :HTMLPRINTORIG variable to 1.
  • In the SQLI step after the HTMLCURSOR step, set the :SAVECOPY variable to 1.

These variables are updated during runtime, so that if several documents are being printed together, the variables will be set for each individual document, depending on its status.

So it would seem that the undocumented system variable :SENDOPTION is the key to the client's request. I added a little code to the SQLI stage of the procedure WWWSHOWORDER that would save the value of :SENDOPTION in a table of private constants. From this I was able to ascertain that when printing, the value is 'PRINT' and when displaying on the screen, it's 'DISPLAY'. Presumably when sending email the value is 'AMAIL' and for fax (??) it's probably 'FAX', although I don't need to know this.
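The probe itself was a one-liner. A sketch of what I mean, reusing the TEST_CONST table of private constants that appears in the 'Followup procedures' post below (the constant name 'SENDOPT' is arbitrary):

UPDATE TEST_CONST SET CHARVALUE = :SENDOPTION WHERE NAME = 'SENDOPT';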

For my purposes (or for the client), I have to update the private field 'PRINTED' in the ORDERS table if :SENDOPTION is not equal to 'DISPLAY'. I think it better to save a username and date/time than just the fact that the order has been printed, but that's only my opinion.
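A minimal sketch of such an update, assuming that the SQLI stage has the order in a linked ORDERS file :$.PAR, and that PRINTEDBY and PRINTEDDATE are hypothetical private fields for the username and date/time:

LINK ORDERS TO :$.PAR;
:ORD = 0;
SELECT ORD INTO :ORD FROM ORDERS WHERE ORD > 0;
UNLINK ORDERS;
/* mark the order as printed only when the output is not the screen */
UPDATE ORDERS SET PRINTED = 'Y', PRINTEDBY = SQL.USER, PRINTEDDATE = SQL.DATE
WHERE ORD = :ORD AND :SENDOPTION <> 'DISPLAY';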

I can't imagine any further use for this undocumented variable, but I vaguely recall that once I would have liked to have known its value, although for what, I do not remember.

Tuesday, 1 July 2025

Recursive FILELIST

The SDK documents, under the heading Browsing the Contents of a Folder (page 233 in my copy of the V23 document), how to obtain a list of files in a folder:

:DIR = '../../tmpDir';
SELECT SQL.TMPFILE INTO :ST6 FROM DUMMY;
SELECT SQL.TMPFILE INTO :MSG FROM DUMMY;
EXECUTE FILELIST :DIR, :ST6, :MSG;
/* In the linked file of the STACK6 table, you will find all files and folders under the input directory :DIR. */
LINK STACK6 TO :ST6;
GOTO 99 WHERE :RETVAL <= 0;
DECLARE NEWFILES CURSOR FOR
SELECT TOLOWER(NAME) FROM STACK6
WHERE TOLOWER(NAME) LIKE 'loadorder*';
OPEN NEWFILES;
...

I've written code based on this to poll a specific directory; if a file is found, then the procedure performs some action based on this file (e.g. reads the file and uses it as input for an interface). At the end, the procedure deletes the file so that it won't be found again.
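Inside such a loop, the deletion after handling looks something like this sketch, where :FNAME is assumed to hold the name fetched from the cursor and FILEDEL is the SDK's file-deletion program:

:FULLNAME = STRCAT (:DIR, '/', :FNAME);
EXECUTE FILEDEL :FULLNAME;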

This is all well and good when Priority is hosted on a company server, but is problematic when the web interface is used and Priority is hosted 'in the cloud'. The new online SDK discusses this scenario and states that 'The system/sync folder is a special folder available in Priority installations on the public cloud. It provides a location, accessible by SFTP, where users can upload files from an external source. The folder behaves a bit differently than regular Priority folders.' I am working with a client for whom some external company is uploading files to a subfolder of system/sync; the client wants these files (or rather, references to these files) to be stored as attachments to a financial journal entry. I tried using the 'standard' FILELIST code as written above, but this was not bringing me any joy.

After approaching Priority Software, an undocumented feature of FILELIST was revealed: in order to traverse system/sync, one has to perform a recursive search by means of the flag '-R'. There is also another flag whose meaning escapes me at the moment, '-d'. Most importantly, instead of linking STACK6 to the results, one links STACK_ERR. It's not clear to me why this change is required: STACK6 has the fields NUM (equivalent to STACK_ERR.LINE), TYPE (equivalent to CHARDATA) and NAME (equivalent to MESSAGE), where the length of NAME is 100 characters. But the proof is in the pudding, and a recursive search does not work with STACK6.

Here is the required code when using system/sync as written for WINDBI as a test:

SELECT SQL.TMPFILE INTO :ST6 FROM DUMMY;
:CHKDIR = SYSPATH ('SYNC', 0);
EXECUTE FILELIST :CHKDIR, :ST6, '-R', '-d', SQL.TMPFILE;
LINK STACK_ERR TO :ST6;
SELECT LINE, CHARDATA, MESSAGE FROM STACK_ERR FORMAT;
UNLINK STACK_ERR;

This code does provide the names of files in the target directory.

While writing these words, I've had a problematic insight: if a procedure traverses the given folder and finds files, it will attach them to journal entries. The next time the procedure runs, the same files will be found - so it seems possible to attach the same file twice to some entity! The keys of the EXTFILE table are IV, TYPE and EXTFILENUM, and a quick test shows that one can indeed attach the same file more than once to the same entity. Obviously I will have to implement some form of check to prevent this; as opposed to the polling code that deletes the file after handling it, here the files have to remain 'forever'. No date information seems to be passed, so one can't use that as a filter.
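A sketch of such a check before attaching, assuming that :IV holds the journal entry's internal number, :ETYPE its EXTFILE type code, and that the file reference is held in EXTFILE.EXTFILENAME:

:FOUND = 0;
SELECT 1 INTO :FOUND FROM EXTFILE
WHERE IV = :IV AND TYPE = :ETYPE AND EXTFILENAME = :OUTFILE;
/* skip the attachment where :FOUND = 1 */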

Wednesday, 18 June 2025

Converting minutes into days:hours:minutes

One of my clients wanted a report based on order statuses (as we now know, TODOLIST), displaying the time spent in each status in the format DD:hh:mm. The field TODOLIST.DURATION holds this time in minutes, and the task is to convert it.

One might ask, what is the problem with using DTOA to convert the number? It seems that if one does not display a month, then the number of days in the current month is used. For 51936, a number that I had, the expression DTOA (51936, 'DD:hh:mm') gives the result 06:01:36, because there aren't more than 30 days in the current month (the expression is equivalent to 36 days and some change).

The form DOCTODOLISTLOG purports to give the number of days and number of hours in separate columns, but this isn't what the client wants and it's also not exact (a fraction of hours is given, not the number of minutes). The number of days is defined as (0.0 + TODOLIST.DURATION) / 1440.0. I don't understand why the number is promoted to a real, as TODOLIST.DURATION / 1440 gives the same answer (36). But I'm not taking a chance, so I will use the same expression. As I want an integral number of days, I need to use ITOA, but this won't work on a real, so first I have to round the number, giving the expression ITOA (ROUND ((0.0 + TODOLIST.DURATION) / 1440.0), 2). The final 2 ensures that the result will have at least two digits, with leading zeroes added as necessary. If I had kept the number as a real and used RTOA, then I could have chopped off any remainder but there would be no leading zeroes.

What about the hours and minutes? The way that I did this in a report was first to define a value that is not displayed: TODOLIST.DURATION MOD 1440. This is done in order to simplify the remaining calculations. Using 51936 as input, this expression results in 96; clearly this is equivalent to 1 hour and 36 minutes, or #207 / 60 and #207 mod 60, where #207 is the number of the expression.

Thus the final string is given by the following formula
STRCAT (ITOA (ROUND((0.0 + TODOLIST.DURATION)/ 1440.0), 2), ':',
ITOA (#207 / 60, 2), ':', ITOA (#207 MOD 60, 2))

If TODOLIST.DURATION is 51936, then this monstrosity results in the string 36:01:36 (yes, the client also wanted the duration to appear in red if it were over 12 days - but that's another story).
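The arithmetic can be sanity-checked in WINDBI with throwaway variables:

:DUR = 51936;
:REM = :DUR MOD 1440; /* 51936 - 36 * 1440 = 96 */
SELECT STRCAT (ITOA (ROUND ((0.0 + :DUR) / 1440.0), 2), ':',
ITOA (:REM / 60, 2), ':', ITOA (:REM MOD 60, 2))
FROM DUMMY FORMAT; /* prints 36:01:36 */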

Monday, 16 June 2025

Showing 'info only' parts in BOMs

I spent an hour banging my head against a wall today, trying to find the correct invocation to SONRAW that would result in showing all sons, including those that are marked as 'info only'. The basic form of the command is

EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, :$.OPT, :$.REV
where: :$.PRT is a linked file of parts that one wants to 'explode'
SQL.DATE8 is, of course, today - this may have some function with trees that have revisions
:$.ARC is the result, a linked file of tuples of PARTARC
:$.OPT is the desired option, the raison d'être of this blog post
:$.REV is apparently something to do with revisions ... or not.

If one dissects the various values of :$.OPT as in the PARTTREE procedure, 0 = regular, 2 = including phantoms, 4 = including info only parts. These values can be combined as they form a bit mask. The value of :$.REV had me scratching my head: if :$.OPT is 0 then :$.REV can be empty and one does get some form of tree, but it seems best that :$.REV always be set to '-N'. I wasted a great deal of time trying to find the right invocation to include info-only sons where :$.REV was empty.
  • So the invocation for a 'normal' BOM would be
    EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 0, '-N'
  • The invocation for a 'normal' BOM with phantoms would be
    EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 2, '-N'
  • And the invocation for a 'normal' BOM with info-only sons would be
    EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 4, '-N'
Cut this out and store it somewhere.
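And since the option values form a bit mask, they should be combinable; a BOM with both phantoms and info-only sons would presumably be (I haven't tested this combination)
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 6, '-N'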

Sunday, 8 June 2025

Converting procedures and reports to English

I am in the middle of converting many procedures and reports to English for a client who is starting operations in India. This process is fairly straightforward and boring, but there are some problems that have to be overcome.

The process can be divided into two parts: translating labels (of the procedure/report itself, procedural parameters, field titles in reports) and handling fields that have an English value as well as a Hebrew value. Translating labels is straightforward; the only point worth noting is that as labels are (generally) limited to 20 characters, one should enter the translated label for language 3 (American English) first and then copy the label to language 2 (British English). It seems that the check for length occurs only for language 3.

One place that requires slightly special handling is reports whose output title differs from the report's own title. Normally, the translated title of the report goes in the 'translation of entity title' sub-form, but a specific title that can be longer goes in the 'output title' sub-form. The first is displayed in a menu whereas the second is only for output. This sub-form has its own sub-form, 'Translation of Output Title'.

Handling fields with a separate English value is more involved. There is only one table, DOCSTATUSES, that has both STATDES and ESTATDES fields, and choosing which to display is very simple:

(SQL.LANGUAGE = 1 ? DOCSTATUSES.STATDES : DOCSTATUSES.ESTATDES)

But most other tables have the English value in a separate table, e.g. CUSTOMERS.CUSTDES and CUSTOMERSA.ECUSTDES. This is slightly more involved, as one has to write the conditional statement as well as add a left join between CUSTOMERS and CUSTOMERSA.
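For the CUSTOMERS case, the result would look something like this sketch, where the question mark after CUSTOMERSA marks the left join (in a report, the equivalent is flagging the join column):

SELECT CUSTOMERS.CUSTNAME,
(SQL.LANGUAGE = 1 ? CUSTOMERS.CUSTDES : CUSTOMERSA.ECUSTDES)
FROM CUSTOMERS, CUSTOMERSA ?
WHERE CUSTOMERS.CUST = CUSTOMERSA.CUST
FORMAT;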

The real problems start when this field is a parameter: the client has a propensity for including the part status in reports, where it is frequently a parameter. In this case, one has to add a CHOOSE-FIELD trigger for the field that in itself is very interesting as it displays how Priority manages a UNION ALL.

SELECT PARTSTATS.STATDES, ''
FROM PARTSTATS
WHERE PARTSTAT <> 0 AND INACTIVEFLAG <> 'Y' AND SQL.LANGUAGE = 1;
SELECT /* AND STOP */ DOCSTATUSES.ESTATDES, ''
FROM DOCSTATUSES
WHERE DOCSTATUS <> 0 AND TYPE = '4' AND SQL.LANGUAGE > 1;

The (currently) unsolvable problem is with part status being a parameter to an INPUT or SQLI stage. One can give the parameter an English title and define the above CHOOSE-FIELD trigger, but into which table is the value stored: PARTSTATS or DOCSTATUSES? One somewhat clumsy solution is to use different stages for the different languages, viz.

[STAGE 10] :$.GO = (SQL.LANGUAGE = 1 ? 30 : 50);
[STAGE 20] GOTO {$.GO};
[STAGE 30] /* Hebrew */ LINK PARTSTATS TO :$.STA; /* parameter */ ....
[STAGE 40] GOTO {60};
[STAGE 50] /* English */ LINK DOCSTATUSES TO :$.STD; /* parameter */ ...
[STAGE 60] Report

For a moment, I thought that I could write a CHOOSE-FIELD trigger similar to the above for the procedural parameter that would choose either PARTSTATS.PARTSTAT or DOCSTATUSES.DOCSTATUS, but there are two problems with this: (a) the parameter has to be a character value, not numerical (and using ITOA doesn't solve the problem); (b) the form that Priority uses for choosing a value depends on the table linked to the parameter, which in this case is PARTSTATS. So it looks like I'm stuck with the GOTO solution.

One possible improvement would be in stage 50: instead of duplicating whatever is in stage 30 but using DOCSTATUSES, the appropriate values could be entered into a linked instance of PARTSTATS (just as stage 30 only gets values for PARTSTATS). Then there need be only one 'real' SQLI stage, 60, that does whatever is necessary for the report.
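A sketch of what stage 50 might then contain; this rests on my assumption that part statuses share their numbers with DOCSTATUSES of TYPE '4' (as the CHOOSE-FIELD above suggests), so treat it with suspicion:

LINK DOCSTATUSES TO :$.STD; /* the English parameter */
LINK PARTSTATS TO :$.STA; /* the file that stage 60 reads */
INSERT INTO PARTSTATS (PARTSTAT, STATDES)
SELECT DOCSTATUS, ESTATDES FROM DOCSTATUSES
WHERE DOCSTATUS <> 0;
UNLINK DOCSTATUSES;
UNLINK PARTSTATS;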

[Update] The GOTO solution won't work either. Stage 50 has the linked file DOCSTATUSES, and so the form that will be displayed or from which values will be extracted is also DOCSTATUSES. Any attempt to run this form results in the error message 'This form is reserved for internal use by the system'. To get around this problem, I defined a new form that is based on DOCSTATUSES, shows only ESTATDES and SORT and is defined as Q. In the 'continuation' sub-form of the procedural parameter, I defined the target form to be my private form.

This works! But it's still clumsy.

Monday, 19 May 2025

Program logging

I was asked this morning how often user X had run procedure Y in the past few months. I explained that Priority saves (to the best of my knowledge) such data in the form of saved reports for a few days and then I delete those reports. Of course, I had deleted the reports this morning so I couldn't give any data.

What I could do was add logging of procedure usage to Priority. First I defined a table, TEST_PROGUSERS:

Field    Type   Width  Title
PROG     INT    13     > Procedure
USER     INT    13     > User
CURDATE  DATE   14     Date
DETAILS  RCHAR  48     Details

Then I built a read-only form based on this table; this form is a son form of EPROG, the procedure generator. In the procedures whose usage I want to track, I added the following statement:

INSERT INTO TEST_PROGUSERS (PROG, USER, CURDATE, DETAILS)
SELECT EXEC.EXEC, SQL.USER, SQL.DATE,
STRCAT (DTOA (:$.FDT, 'DD/MM/YY'), ',', DTOA (:$.TDT, 'DD/MM/YY'))
FROM EXEC
WHERE EXEC.ENAME = '$' AND EXEC.TYPE = 'P';

The 'details' field is supposed to contain the values of the parameters passed to the procedure. 

Now I can see at a glance who has run a given procedure and when, without having to rely on saved reports (which even if they exist are inaccurate as someone could send the output to Excel, thus preventing the report instance being saved).
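With the table in place, a WINDBI query answers this morning's question directly; 'MYPROC' is a placeholder for the procedure's internal name, and I'm assuming that the login name lives in USERS.USERLOGIN:

SELECT USERS.USERLOGIN, TEST_PROGUSERS.CURDATE, TEST_PROGUSERS.DETAILS
FROM TEST_PROGUSERS, USERS, EXEC
WHERE TEST_PROGUSERS.USER = USERS.USER
AND TEST_PROGUSERS.PROG = EXEC.EXEC
AND EXEC.ENAME = 'MYPROC' AND EXEC.TYPE = 'P'
FORMAT;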

Tuesday, 13 May 2025

Debugging in the web interface

Before I get started, I have to note that debugging in the web interface is a pain, when compared to debugging with the classic interface.

In the documentation can be found the following: A common step when debugging code that includes linked temporary tables is dumping the contents of the temporary table to a file. This is used to investigate the values the system was working with at a certain point in the code. This usually follows the structure:

SELECT COLUMN1, COLUMN2... FROM LINKED_TABLE
TABS :FILENAME;

A common question when developing on Priority Web is how to access these files in a situation when there is no access to the server machine.

I won't quote the documentation further because I think that it gives a false and incomplete solution. I want to show a solution that I developed that creates debug files, then saves them as attachments to a specific customer (www) for viewing. This solution also allows the creation of several files, which is useful when the procedure creating those files is run under the TTS. I'll display the complete subroutine first, after which I will explain the various lines.

[1] SELECT SQL.TMPFILE INTO :INFILE FROM DUMMY;
[2] SELECT * FROM HTMLCOLORS TABS :INFILE;
[3] SELECT STRCAT(SYSPATH('MAIL', 1), '/TEST/$,', DTOA (SQL.DATE, 'DD-MM-YY hh-mm'), '.txt') INTO :OUTFILE FROM DUMMY;
[4] EXECUTE COPYFILE :INFILE, :OUTFILE;
[5] SELECT SQL.TMPFILE INTO :TEST_FILE FROM DUMMY;
[6] LINK GENERALLOAD RCF TO :TEST_FILE;
[7] GOTO 901 WHERE :RETVAL <= 0;
[8] INSERT INTO GENERALLOAD RCF (LINE, RECORDTYPE, TEXT6)
[9] VALUES (1, '1', 'www');
[10] INSERT INTO GENERALLOAD RCF (LINE, RECORDTYPE, TEXT7, TEXT3)
[11] VALUES (2, '2', :OUTFILE, 'Debug');
[12] EXECUTE INTERFACE 'TEXT_ADDEXT2CUST', SQL.TMPFILE, '-L', :TEST_FILE;
[13] UNLINK AND REMOVE GENERALLOAD RCF;
[14] LABEL 901;

The online documentation says to create a temporary file and output the required data to this file. This is what happens in lines 1 and 2. Line 3 creates a string whose value will be '../../system/mail/TEST/' + name of the procedure + date and time + '.txt'. This line differs from the website and it's important to explain why: the website uses the NEWATTACH procedure to create a filename - from my tests, this file will be in an arbitrary subdirectory of ../../system/mail, e.g. ../../system/mail/202402/0mjw3vv/name of file.txt. In retrospect, this doesn't matter too much for reasons that I will explain shortly. Line 4 copies the temporary file to the filename that was built in the previous line; this causes a physical file to be created whose name is stored in :OUTFILE.

Lines 5-13 are concerned with creating a new instance of GENERALLOAD and populating it with the customer www and the file that was created in line 4. As the variable :OUTFILE is used, it doesn't really make any difference if the SYSPATH or NEWATTACH method is used; in the web interface, the user can't see the directory, so it doesn't matter if the file is in a specific directory or in an arbitrary one.

The interface TEXT_ADDEXT2CUST has two forms: CUSTOMERS, where CUSTNAME ('www') is stored in field TEXT6 and CUSTEXTFILE (in the singular!), where the name of the file is stored in TEXT7 and the description in TEXT3. When I was developing the subroutine, a strange error message about the interface appeared; it transpires that I automatically used the son form EXTFILES (in the plural) as the second form in the interface, and not CUSTEXTFILE.

Two final notes:

  1. The attachments will be displayed in reverse order of addition, i.e. the first file shown will be the last one created. This is because I couldn't be bothered to use the EXTFILENUM field in the interface. I think that it's better this way, as one doesn't have to scroll through a list of files in order to find the newest.
  2. Deleting a line in the attachments form will delete the physical file! Newer versions of Priority display a warning message, although it is not necessarily clear. This does allow old and irrelevant files to be removed.

Tuesday, 29 April 2025

Getting a value from the last line in a table

I've often been in the situation where I want to get some value from the (currently) last line in a table: for example, the customer number of the last order entered into the system. I would do this by means of two statements:

SELECT MAX (ORD) INTO :MORD FROM ORDERS;
SELECT CUST FROM ORDERS WHERE ORD = :MORD;

In case anyone was wondering, the following is illegal syntax

SELECT CUST FROM ORDERS WHERE ORD = MAX (ORD);

But it turns out that there is a way of getting the desired value in one statement, although to be fair, the statement includes a subquery, albeit with special syntax. Below are two statements; the first works in Firebird but not in Priority SQL whereas the second works in both Priority and Firebird, although the Firebird query is very slow.

/* this gives an 'ambiguous column name CUST' error in Priority */
SELECT O1.CUST FROM ORDERS O1
WHERE O1.ORD = (SELECT MAX (O2.ORD) FROM ORDERS O2);
/* this works in Priority */
SELECT O1.CUST FROM ORDERS O1
WHERE O1.ORD IN (SELECT MAX (O2.ORD) FROM ORDERS O2);

Something else to be filed under 'Learn something new every day'.

Monday, 28 April 2025

Refresh parameters for TTS task

My company has 260 tasks defined to run automatically by means of the task scheduler (TTS); the vast majority of these tasks are procedures that look for specific situations (e.g. open orders that should have been supplied) and assuming that there is output, send the report by email to a user group. These procedures have their parameters defined within their code. 

One procedure - not mine - that does not have its parameters defined internally is MRP. This procedure can of course be run manually, and there are over ten parameters that the user can change, not including several system constants that generally do not change. In order to run MRP successfully by the TTS, this task has an entry in the field 'run according to the input of user'. I have always understood that an automatic invocation of MRP will run using the parameters of a given user, and that if I want to change the parameters for the automatic run, then first I would have to run MRP manually and in doing so, change the parameters to their new values. Or so I thought.

Until now, the value of the parameter 'open work orders until' has been 'the end of the next quarter', which currently would be 30/09/25. As, unusually, we have some orders with a supply date in October, no work orders were being created for these lines. The production manager wanted to know the reason for this; I explained, and we agreed that the value of this parameter would be changed to 'the end of the next year', at the moment 31/12/26. I ran MRP manually, entering this value, and indeed work orders were created from this run for those orders with an October supply date.

I was thus surprised to see that in the next automatic run of MRP, the 'open work orders until' parameter had reverted to 30/09/25. My understanding of how to change parameters for automatic invocation was flawed. It turns out that there is a simple, but rarely used, fix for this situation: one can run a procedure (by direct invocation) called TTSUSERINPUT, 'Refresh Task Input by User'. It transpires that the task has its own set of parameters and these are updated from the user's values by means of this procedure. And indeed, the next automatic run of MRP after running this procedure was run according to the updated parameters.

File under 'Learn something new every day'...


Monday, 14 April 2025

An important change in version 24 regarding the status log of orders

I have written many procedures and reports that look at an order's status log; all these procedures look at the ORDSTATUSLOG table. A client wrote to me a few days ago and told me that one of these procedures (that is run automatically by the TTS) had stopped working. I checked this table and saw that the last entry was almost one month ago, but that changes in order status were still being reported. My unofficial conclusion was that I should no longer reference the ORDSTATUSLOG table but rather TODOLIST.

This table holds data from many different tables, so accessing the required data is slightly more complicated than it used to be. Here is an example of the queries that I used to write in order to obtain all the orders that were approved 'yesterday':

SELECT ORDERS.ORDNAME
FROM ORDERS, ORDSTATUSLOG
WHERE ORDERS.ORD = ORDSTATUSLOG.ORD
AND ORDSTATUSLOG.ORDSTATUS = -2
AND ORDSTATUSLOG.UDATE BETWEEN SQL.DATE8 - 24:00 AND SQL.DATE8 - 1
...

This now becomes

SELECT ORDERS.ORDNAME
FROM ORDERS, TODOLIST
WHERE ORDERS.ORD = TODOLIST.IV
AND TODOLIST.TYPE = 'O'
AND TODOLIST.DOCSTATUS = 135
AND TODOLIST.UDATE BETWEEN SQL.DATE8 - 24:00 AND SQL.DATE8 - 1
...

In belated recognition, the client sent me an extract from the Version 24 release notes that sort-of documented this change: they note that there will be no new tuples in ORDSTATUSLOG but don't say where the log will continue to be maintained, presumably because status changes had always been logged in TODOLIST.

Thursday, 10 April 2025

Writing multi-environment procedures

I have been tasked a few times with writing a procedure that iterates over all the active environments and saves data in a special table that is going to be accessed by an API. There are some special gotchas that need to be overcome; the following will not work:

DELETE FROM MYTABLE;
SELECT SQL.ENV INTO :HOME FROM DUMMY;
:LINE = 0;
DECLARE C1 CURSOR FOR
SELECT DNAME FROM ENVIRONMENTA
WHERE ACTIVE = 'Y' AND DNAME <> '';
OPEN C1;
GOTO 300 WHERE :RETVAL <= 0;
LABEL 100;
FETCH C1 INTO :NAME;
GOTO 200 WHERE :RETVAL <= 0;
ENV :NAME;
INSERT INTO MYTABLE (.................) ;
LOOP 100;
LABEL 200;
CLOSE C1;
LABEL 300;
ENV :HOME;

Presumably there is a form based on the table MYTABLE; initially all the data is wiped, then the procedure iterates through the environments and saves whatever data needs to be saved. At the end, the procedure returns to the initial environment (:HOME) and presumably the data is displayed in the appropriate screen. Unfortunately, as I noted before the code, this doesn't work: the MYTABLE table has a different instance in each environment, or in other words, MYTABLE in environment A is not the same as MYTABLE in environment B. The following will work:

SELECT SQL.TMPFILE INTO :ROL FROM DUMMY;
LINK MYTABLE TO :ROL;
SELECT SQL.ENV INTO :HOME FROM DUMMY;
...
LABEL 300;
ENV :HOME;
DELETE FROM MYTABLE ORIG;
INSERT INTO MYTABLE ORIG SELECT * FROM MYTABLE; /* the linked table */
UNLINK AND REMOVE MYTABLE;

Monday, 31 March 2025

Importing problematic data from an external source into Priority

I was recently faced with the challenge of writing a procedure that would import data from an external source and then build an "over the counter invoice" from that data. The procedure has several stages, but basically it involves the following:

  1. Copying a source file from wherever it is to a given filename in ../../system/load, where the 'given filename' is the name of an interface.
  2. Executing a table interface whose name matches the filename from stage 1. This interface places the data into the table LOADORDERS for the sake of convenience.
  3. Optional: visually checking the data in the LOADORDERS form to ensure that they are correct (for example, that Hebrew is written right to left and that quantities are sensible). This is necessary only in the development stage.
  4. Reading the data from LOADORDERS into GENERALLOAD, then executing a form interface on this data in order to create the invoice.
I've done this several times before so it doesn't faze me. But when I started running tests on the procedure, all kinds of error messages started appearing, mainly concerned with the data being entered into the invoice's lines. Some of these errors are standard (e.g. insufficient inventory) and some appear to be local to the client. After asking for guidance, I was told to try to ignore all of these errors, so that an invoice would be created even with lines that may be problematic.

Part of the documentation for flags concerning interfaces reads: Ignore Warnings — Priority forms generate two types of messages: errors and warnings. Leave this column blank if you want the INTERFACE program to treat warning messages as errors. To ignore warning messages, flag this column. Note: The same purpose is served by the –w parameter, which can be included during the form load [emphases mine].

So one can ignore warning messages, but there is no option to ignore error messages. But of course, the client was asking me to make the procedure/interface ignore those messages. It turns out that there is a badly documented (if at all) method for doing this: enter the :FORM_INTERFACE and :FORM_INTERFACE_NAME standard variables. One can add or update a trigger in the target form (in this case, EINVOICEITEMS) that will skip a check if the trigger is called by any interface or by a specific interface only.

GOTO 1 WHERE :FORM_INTERFACE = 1;
ERRMSG 1 WHERE [some check is true];
LABEL 1;
GOTO 2 WHERE :FORM_INTERFACE_NAME = 'TEST_TEST';
ERRMSG 2 WHERE [some other check is true];
LABEL 2;

In other words, assuming that the above is part of a PRE-INSERT trigger, the first check will be skipped if the data is being entered by any interface, whereas the second check will be skipped only if the interface name is TEST_TEST. This is fine where the form is privately developed, but problematic if the form is standard; whilst it is possible to alter the PRE-INSERT trigger by adding code to skip over a check, such a change is liable to be overwritten when the Priority installation is upgraded.

So what can one do to skip over the tests in a standard PRE-INSERT trigger? There is a very simple solution: first, one creates a trigger that will be executed before the PRE-INSERT trigger, e.g. AAAA_PRE-INSERT, then this trigger is defined with the following line:

END WHERE :FORM_INTERFACE_NAME = 'TEST_TEST';

My simple test showed that the standard PRE-INSERT trigger was skipped, although the internal PRE-INSERT trigger was executed - this enters the data into the database.

Obviously I have to check this solution on the client's site but all the signs are positive.
[Update] Well, it didn't work as I expected on the client's site, but fortunately the primary error that I was trying to avoid was in a private trigger, so I used the first technique of bracketing the check (actually a call to a private buffer on a different form) with the statement GOTO 310325 WHERE :FORM_INTERFACE_NAME = 'TEST_TEST' and the label 310325.

Wednesday, 5 March 2025

Followup procedures

A few times I have had the need to perform some action after closing an invoice (sometimes customer invoices, sometimes supplier invoices); this has sometimes been updating an invoice that was opened in another company as a customer invoice and transferred to the current company as a supplier invoice. One quickly comes to the conclusion that this cannot be done via a POST-UPDATE trigger on the invoice, as the closure is performed by a separate procedure and the actual form does not create any event that can be handled.

The gurus at Priority Software were aware of this problem and so added a solution that unfortunately is barely known and certainly not documented. If one goes to the Financials > Maintenance of Financials > Basic Data > Financial Attributes > Financial Documents menu option (form name IVTYPES), two columns can be seen: Initial Procedure and Follow-up Procedure. The help text for the followup procedure appears below.

But of course, there is no documentation that might explain how such a procedure can be defined. Obviously an invoice number has to be passed in a linked file, but what is the parameter name for that file? Is it :$.IV - the CLOSEYIV procedure that closes a supplier invoice has the parameter defined as :$.IV - or is it :$.PAR? 

There's only one way to find out and that's by trial and error. It turns out that the parameter should be called :$.PAR. Further testing showed that it's best to extract the IV field from the linked table, after which the linked table should be closed and any further access made to the unlinked INVOICES table. This is probably because the linked table contains stale data; in particular, it does not contain the new number that the invoice receives after having been closed. Following is a very simple sample procedure that writes the new invoice number to a private table, simply to check that the value is being obtained:

LINK INVOICES TO :$.PAR;
:IV = 0;
SELECT IV INTO :IV FROM INVOICES WHERE IV > 0;
UNLINK INVOICES;
SELECT IVNUM INTO :PAR2 FROM INVOICES WHERE IV = :IV;
UPDATE TEST_CONST SET VALUE = :IV, CHARVALUE = :PAR2 WHERE NAME = 'AB';

A problem with this kind of procedure is that debugging by means of inserting WRNMSGs throughout the procedure doesn't work, or more accurately, the warning messages don't get displayed. That's the reason that I had to update fields in a table.

VERY IMPORTANT: the followup procedure has to appear on a menu such that the person who closes the invoice will have permission to run the procedure.

Tuesday, 21 January 2025

Designing an Excel template for a report that does not have data in every row

One of my fellow workers wanted certain columns in a report not to be displayed. I told him that the HTML template does not display these columns, which is when I realised that he was probably exporting the data to Excel. To the best of my knowledge, Excel does not honour designed templates and shows all the columns. The answer was to create an Excel template that hides the required columns.

I have never (knowingly) done this before so I wasn't sure how to start. I right clicked on the appropriate menu option and chose 'design Excel template'; I indicated that I was creating a new template. I then filled in the parameters for the report and after a while Excel displayed the data. I deleted the appropriate columns in the spreadsheet then saved it. When I tried to create the report again using this template, the unwanted columns still appeared.

After receiving some help, I was told to add a new worksheet then copy the appropriate data from the 'datasheet' worksheet. I thought that I would be clever, so I added this formula to cell A1:

=datasheet!A:L

Whilst this effectively copied the appropriate data (I added the formula =datasheet!P:V to cell M1), it also copied over a million empty lines! In other words, I had to find a formula for copying the contents of a column until an empty cell is reached. I reached for my trusty AI and received this formula

=FILTER(DataSheet!A:A, (DataSheet!A:A <> ""))

I then copied this formula to the appropriate column headers, ensuring that I was accessing the appropriate column in the datasheet (i.e. skipping over three columns). This seemed to be correct, but unfortunately was not, as some columns have empty cells in rows where other columns do have values, and so the data was 'non-aligned'. I knew that columns A-C would have data in every row, so I asked the AI program how I could use these columns as a reference. Below is the final formula, for column D, that shows how each column can use column A (the dollar sign means 'don't change the column when the formula is copied to another column'). Again, the formula for column M (the column that shows data taken from column P) simply substitutes P for D in the formula.

=FILTER(DataSheet!D:D, (DataSheet!$A:$A <> ""))

In my humble opinion, this is sufficiently non-intuitive to warrant documenting it here.

Monday, 20 January 2025

String length bug found

In a procedure that sends email, I had the following expression
:EMAIL = (:$.DBG = 1 ? 'noamn@testing.com' : :EMAIL);

The value of :EMAIL prior to this expression was sigall@somecompany.co.il - the length of this string is 24 characters, whereas the length of noamn@testing.com is 17 characters. Regardless of the value of :$.DBG, the resulting string would have a length of 17 characters (apparently the length of the first alternative in the conditional determines the length of the result), and so if :$.DBG = 0, there would be an attempt to send an email to the non-existent address sigall@somecompan; of course, this would fail.

As always, it took me quite some time to find out why the email address in the letter was being truncated but took only a minute to fix. If my address is assigned to a variable prior to evaluation, then the resulting value will have the correct length.

:EMAIL = 'sigall@somecompany.co.il';
:NOAM = 'noamn@testing.com';
:EMAIL = (:$.DBG = 1 ? :NOAM : :EMAIL);

If :$.DBG = 0 then :EMAIL = 'sigall@somecompany.co.il'. If :$.DBG = 1 then :EMAIL = 'noamn@testing.com'.