Showing posts with label procedures. Show all posts

Tuesday, 1 July 2025

Recursive FILELIST

The SDK documents, under the heading Browsing the Contents of a Folder (page 233 in my copy of the V23 document), how to obtain a list of files in a folder.

:DIR = '../../tmpDir';
SELECT SQL.TMPFILE INTO :ST6 FROM DUMMY;
SELECT SQL.TMPFILE INTO :MSG FROM DUMMY;
EXECUTE FILELIST :DIR, :ST6, :MSG;
/* In the linked file of the STACK6 table, you will find all files and folders under the input directory :DIR. */
LINK STACK6 TO :ST6;
GOTO 99 WHERE :RETVAL <= 0;
DECLARE NEWFILES CURSOR FOR
SELECT TOLOWER(NAME) FROM STACK6
WHERE TOLOWER(NAME) LIKE 'loadorder*';
OPEN NEWFILES;
...

I've written code based on this to poll a specific directory; if a file is found, then the procedure performs some action based on this file (e.g. reads the file and uses it as input for an interface). At the end, the procedure deletes the file so that it won't be found again.
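In outline, such a polling procedure looks something like the sketch below. The directory, the handling step and the use of FILEDEL for the deletion are my own assumptions for illustration, not code from any specific installation:

```
:DIR = '../../tmpDir';
SELECT SQL.TMPFILE INTO :ST6 FROM DUMMY;
SELECT SQL.TMPFILE INTO :MSG FROM DUMMY;
EXECUTE FILELIST :DIR, :ST6, :MSG;
LINK STACK6 TO :ST6;
GOTO 99 WHERE :RETVAL <= 0;
DECLARE NEWFILES CURSOR FOR
SELECT NAME FROM STACK6 WHERE NAME <> '';
OPEN NEWFILES;
GOTO 90 WHERE :RETVAL <= 0;
LABEL 10;
FETCH NEWFILES INTO :NAME;
GOTO 80 WHERE :RETVAL <= 0;
:FULLNAME = STRCAT(:DIR, '/', :NAME);
/* ... read :FULLNAME and act on it, e.g. feed it to an interface ... */
EXECUTE FILEDEL :FULLNAME; /* delete so the file won't be found on the next poll */
LOOP 10;
LABEL 80;
CLOSE NEWFILES;
LABEL 90;
UNLINK STACK6;
LABEL 99;
```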

This is all well and good when Priority is hosted on a company server, but is problematic when the web interface is used and Priority is hosted 'in the cloud'. The new online SDK discusses this scenario and states: "The system/sync folder is a special folder available in Priority installations on the public cloud. It provides a location, accessible by SFTP, where users can upload files from an external source. The folder behaves a bit differently than regular Priority folders." I am working with a client for whom some external company is uploading files to a subfolder of system/sync; the client wants these files (or rather, references to these files) to be stored as attachments to a financial journal entry. I tried using the 'standard' FILELIST code as written above, but this was not bringing me any joy.

After approaching Priority Software, an undocumented feature of FILELIST was revealed: in order to traverse system/sync, one has to perform a recursive search by means of the flag '-R'. There is also another flag whose meaning escapes me at the moment, '-d'. Most importantly, instead of linking STACK6 to the results, one links STACK_ERR. It's not clear to me why this change is required: STACK6 has the fields NUM (equivalent to STACK_ERR.LINE), TYPE (equivalent to CHARDATA) and NAME (equivalent to MESSAGE), where the length of NAME is 100 characters. But the proof is in the pudding, and a recursive search does not work with STACK6.

Here is the required code when using system/sync as written for WINDBI as a test:

SELECT SQL.TMPFILE INTO :ST6 FROM DUMMY;
:CHKDIR = SYSPATH('SYNC', 0);
EXECUTE FILELIST :CHKDIR, :ST6, '-R', '-d', SQL.TMPFILE;
LINK STACK_ERR TO :ST6;
SELECT LINE, CHARDATA, MESSAGE FROM STACK_ERR FORMAT;
UNLINK STACK_ERR;

This code does provide the names of files in the target directory.

While writing these words, I've had a problematic insight: if a procedure traverses the given folder and finds files, it will attach them to journal entries. The next time the procedure runs, the same files will be found, so it seems possible to attach the same file twice to some entity! The keys of the EXTFILE table are IV, TYPE and EXTFILENUM, and a quick test shows that indeed one can attach the same file more than once to the same entity! Obviously I will have to implement some form of check to prevent this; as opposed to the polling code that deletes the file after handling it, here the files have to remain 'forever'. No date information seems to be passed, so one can't use this as a filter.
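A minimal guard would be to skip any file whose name is already attached to the given journal entry. In the sketch below I'm assuming that the attachment path is held in EXTFILE.EXTFILENAME and that :IV, :TYPE and :FNAME are already populated by the surrounding loop, so treat this as a pattern rather than finished code:

```
:EXISTS = 0;
SELECT COUNT(*) INTO :EXISTS
FROM EXTFILE
WHERE IV = :IV AND TYPE = :TYPE
AND EXTFILENAME = :FNAME;
LOOP 10 WHERE :EXISTS > 0; /* already attached - skip to the next file */
```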

Sunday, 8 June 2025

Converting procedures and reports to English

I am in the middle of converting many procedures and reports to English for a client who is starting operations in India. This process is fairly straight-forward and boring, but there are some problems that have to be overcome.

The process can be divided into two: translating labels (of the procedure/report itself, procedural parameters, field titles in reports) and handling fields that have an English value as well as a Hebrew value. Translating labels is straight-forward; the only point worth noting is that as labels are (generally) limited to 20 characters, one should enter the translated label for language 3 (American English) first then copy the label to language 2 (British English). It seems that the check for length occurs only for language 3.

One place that requires slightly special handling is reports whose printed title differs from the entity title. Normally, the translated title of the report goes in the 'translation of entity title' sub-form, but a specific title that can be longer goes in the 'output title' sub-form. The first would be displayed in a menu, whereas the second is only for output. This sub-form has its own sub-form, 'Translation of Output Title'.

Handling fields with a separate English value is more involved. There is only one table, DOCSTATUSES, that has both STATDES and ESTATDES fields, and choosing which to display is very simple:

(SQL.LANGUAGE = 1 ? DOCSTATUSES.STATDES : DOCSTATUSES.ESTATDES)

But most other tables have the English value in a separate table, e.g. CUSTOMERS.CUSTDES and CUSTOMERSA.ECUSTDES. This is slightly more involved, as one has to write the conditional expression as well as add a left join between CUSTOMERS and CUSTOMERSA.
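Sketched as an SQLI query, and assuming the usual Priority outer-join notation (a '?' after the joined table), this looks like:

```
SELECT CUSTOMERS.CUSTNAME,
(SQL.LANGUAGE = 1 ? CUSTOMERS.CUSTDES : CUSTOMERSA.ECUSTDES)
FROM CUSTOMERS, CUSTOMERSA ?
WHERE CUSTOMERS.CUST = CUSTOMERSA.CUST;
```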

The real problems start when this field is a parameter: the client has a propensity for including the part status in reports, where it is frequently a parameter. In this case, one has to add a CHOOSE-FIELD trigger for the field that in itself is very interesting as it displays how Priority manages a UNION ALL.

SELECT PARTSTATS.STATDES, ''
FROM PARTSTATS
WHERE PARTSTAT <> 0 AND INACTIVEFLAG <> 'Y' AND SQL.LANGUAGE = 1;
SELECT /* AND STOP */ DOCSTATUSES.ESTATDES, ''
FROM DOCSTATUSES
WHERE DOCSTATUS <> 0 AND TYPE = '4' AND SQL.LANGUAGE > 1;

The (currently) insolvable problem is with part status being a parameter to an INPUT or SQLI stage. One can give the parameter an English title and define the above CHOOSE-FIELD trigger, but into which table is the value stored, PARTSTATS or DOCSTATUSES? One, somewhat clumsy, solution is to use different stages for the different languages, viz.

[STAGE 10] :$.GO = (SQL.LANGUAGE = 1 ? 30 : 50);
[STAGE 20] GOTO; {$.GO}
[STAGE 30] /* Hebrew */ LINK PARTSTATS TO :$.STA; /* parameter */ ...
[STAGE 40] GOTO; {60}
[STAGE 50] /* English */ LINK DOCSTATUSES TO :$.STD; /* parameter */ ...
[STAGE 60] Report

For a moment, I thought that I could write a CHOOSE-FIELD trigger similar to the above for the procedural parameter that would choose either PARTSTATS.PARTSTAT or DOCSTATUSES.DOCSTAT, but there are two problems with this: (a) the parameter has to be a character value, not numerical (and using ITOA doesn't solve the problem); (b) the form that Priority uses, from which a value will be chosen, is dependent on the table linked to the parameter, which in this case is PARTSTATS. So it looks like I'm stuck with the GOTO solution.

One possible improvement to this would be in stage 50 - instead of duplicating whatever is in stage 30 but using DOCSTATUSES, the appropriate values could be entered into a linked instance of PARTSTATS. Similarly stage 30 only gets values for PARTSTATS. Then there need be only one 'real' SQLI stage, 60, that does whatever is necessary for the report.

[Update] The GOTO solution won't work either. Stage 50 has the linked file DOCSTATUSES, and so the form that will be displayed or from which values will be extracted is also DOCSTATUSES. Any attempt to run this form results in the error message 'This form is reserved for internal use by the system'. To get around this problem, I defined a new form that is based on DOCSTATUSES, shows only ESTATDES and SORT and is defined as Q. In the 'continuation' sub-form of the procedural parameter, I defined the target form to be my private form.

This works! But it's still clumsy.

Tuesday, 13 May 2025

Debugging in the web interface

Before I get started, I have to note that debugging in the web interface is a pain, when compared to debugging with the classic interface.

In the documentation can be found the following: A common step when debugging code that includes linked temporary tables is dumping the contents of the temporary table to a file. This is used to investigate the values the system was working with at a certain point in the code. This usually follows the structure:

SELECT COLUMN1, COLUMN2... FROM LINKED_TABLE
TABS :FILENAME;

A common question when developing on Priority Web is how to access these files in a situation when there is no access to the server machine.

I won't quote the documentation further because I think that it gives a false and incomplete solution. I want to show a solution that I developed that creates debug files, then saves them as attachments to a specific customer (www) for viewing. This solution also allows the creation of several files, which is useful when the procedure creating those files is run under the TTS. I'll display the complete subroutine first, after which I will explain the various lines.

[1] SELECT SQL.TMPFILE INTO :INFILE FROM DUMMY;
[2] SELECT * FROM HTMLCOLORS TABS :INFILE;
[3] SELECT STRCAT(SYSPATH('MAIL', 1), '/TEST/$,', DTOA (SQL.DATE, 'DD-MM-YY hh-mm'), '.txt') INTO :OUTFILE FROM DUMMY;
[4] EXECUTE COPYFILE :INFILE, :OUTFILE;
[5] SELECT SQL.TMPFILE INTO :TEST_FILE FROM DUMMY;
[6] LINK GENERALLOAD RCF TO :TEST_FILE;
[7] GOTO 901 WHERE :RETVAL <= 0;
[8] INSERT INTO GENERALLOAD RCF (LINE, RECORDTYPE, TEXT6)
[9] VALUES (1, '1', 'www');
[10] INSERT INTO GENERALLOAD RCF (LINE, RECORDTYPE, TEXT7, TEXT3)
[11] VALUES (2, '2', :OUTFILE, 'Debug');
[12] EXECUTE INTERFACE 'TEXT_ADDEXT2CUST', SQL.TMPFILE, '-L', :TEST_FILE;
[13] UNLINK AND REMOVE GENERALLOAD RCF;
[14] LABEL 901;

The online documentation says to create a temporary file and output the required data to this file. This is what happens in lines 1 and 2. Line 3 creates a string whose value will be '../../system/mail/TEST/' + name of the procedure + date and time + '.txt'. This line differs from the website, and it's important to explain why: the website uses the NEWATTACH procedure to create a filename. From my tests, this file will be in an arbitrary subdirectory of ../system/mail, e.g. ../../system/mail/202402/0mjw3vv/name of file.txt. In retrospect, this doesn't matter too much for reasons that I will explain shortly. Line 4 copies the temporary file to the filename that was built in the previous line; this causes a physical file to be created whose name is stored in :OUTFILE.

Lines 5-13 are concerned with creating a new instance of GENERALLOAD and populating it with the customer www and the file that was created in line 4. As the variable :OUTFILE is used, it doesn't really make any difference if the SYSPATH or NEWATTACH method is used; in the web interface, the user can't see the directory, so it doesn't matter if the file is in a specific directory or in an arbitrary one.

The interface TEXT_ADDEXT2CUST has two forms: CUSTOMERS, where CUSTNAME ('www') is stored in field TEXT6 and CUSTEXTFILE (in the singular!), where the name of the file is stored in TEXT7 and the description in TEXT3. When I was developing the subroutine, a strange error message about the interface appeared; it transpires that I automatically used the son form EXTFILES (in the plural) as the second form in the interface, and not CUSTEXTFILE.

Two final notes:

  1. The attachments will be displayed in reverse order of addition, i.e. the first file will be the last to be created. This is because I couldn't be bothered to use the EXTFILENUM field in the interface. I think that it's better this way as one doesn't have to scroll through a list of files in order to find the newest.
  2. Deleting a line in the attachments form will delete the physical file! Newer versions of Priority display a warning message that is not necessarily clear. This does allow old and irrelevant files to be removed.

Thursday, 10 April 2025

Writing multi-environment procedures

I have been tasked a few times to write a procedure that iterates over all the active environments and saves data in a special table that is going to be accessed by an API. There are some special gotchas that need to be overcome; the following will not work:

DELETE FROM MYTABLE;
SELECT SQL.ENV INTO :HOME FROM DUMMY;
:LINE = 0;
DECLARE C1 CURSOR FOR
SELECT DNAME FROM ENVIRONMENTA
WHERE ACTIVE = 'Y' AND DNAME <> '';
OPEN C1;
GOTO 300 WHERE :RETVAL <= 0;
LABEL 100;
FETCH C1 INTO :NAME;
GOTO 200 WHERE :RETVAL <= 0;
ENV :NAME;
INSERT INTO MYTABLE (.................);
LOOP 100;
LABEL 200;
CLOSE C1;
LABEL 300;
ENV :HOME;

Presumably there is a form based on table MYTABLE; initially all the data is wiped, then the procedure iterates through the environments and saves whatever data need to be saved. At the end, the procedure returns to the initial environment (:HOME) and presumably the data is displayed in the appropriate screen. Unfortunately, as I noted before the code, this doesn't work: the MYTABLE table has a different instance in each environment; in other words, MYTABLE in environment A is not the same as MYTABLE in environment B. The following will work:

SELECT SQL.TMPFILE INTO :ROL FROM DUMMY;
LINK MYTABLE TO :ROL;
SELECT SQL.ENV INTO :HOME FROM DUMMY;
...
LABEL 300;
ENV :HOME;
DELETE FROM MYTABLE ORIG;
INSERT INTO MYTABLE ORIG
SELECT * FROM MYTABLE; /* the linked table */
UNLINK AND REMOVE MYTABLE;

Wednesday, 5 March 2025

Followup procedures

A few times I have had the need to perform some action after closing an invoice (sometimes customer invoices, sometimes supplier invoices); this has sometimes been updating an invoice that was opened in another company as a customer invoice and transferred to the current company as a supplier invoice. One quickly comes to the conclusion that this is not possible to do via a POST-UPDATE trigger on the invoice as the closure is performed by a separate procedure and the actual form does not create any event that can be handled.

The gurus at Priority Software were aware of this problem and so added a solution that unfortunately is barely known and certainly not documented. If one goes to the Financials > Maintenance of Financials > Basic Data > Financial Attributes > Financial Documents menu option (form name IVTYPES), two columns can be seen: Initial Procedure and Follow-up Procedure. The help text for the followup procedure appears below.

But of course, there is no documentation that might explain how such a procedure can be defined. Obviously an invoice number has to be passed in a linked file, but what is the parameter name for that file? Is it :$.IV - the CLOSEYIV procedure that closes a supplier invoice has the parameter defined as :$.IV - or is it :$.PAR? 

There's only one way to find out, and that's by trial and error. It turns out that the parameter should be called :$.PAR. Further testing showed that it's best to extract the IV field from the linked table, after which the linked table should be closed, and any further access be to the unlinked INVOICES table. This is probably because the linked table contains stale data; in particular, it does not reflect the new number that the invoice receives after having been closed. Following is a very simple sample procedure that writes the new invoice number to a private table, simply to check that the value is being obtained.

LINK INVOICES TO :$.PAR;
:IV = 0;
SELECT IV INTO :IV FROM INVOICES WHERE IV > 0;
UNLINK INVOICES;
SELECT IVNUM INTO :PAR2 FROM INVOICES WHERE IV = :IV;
UPDATE TEST_CONST SET VALUE = :IV, CHARVALUE = :PAR2 WHERE NAME = 'AB';

A problem with this kind of procedure is that debugging by means of inserting WRNMSGs throughout the procedure doesn't work, or more accurately, the warning messages don't get displayed. That's the reason that I had to update fields in a table.

VERY IMPORTANT: the followup procedure has to appear on a menu such that the person who closes the invoice will have permission to run the procedure.

Wednesday, 20 November 2024

Using NFILE prevents complications

Someone sent me some code written by a third party that inserts data into the table LABELS, naturally for printing labels. The person who sent me the code wanted the sort order of the labels to be changed so that it would be according to part number, sorted alphabetically. As the original programmer used the INSERT INTO/SELECT FROM syntax, it wasn't possible to simply add 'ORDER BY PART.PARTNAME' at the end of the query.

The solution was provided by what seemed to be a somewhat pointless subroutine earlier in the program.

SUB 20;
INSERT INTO STACK4 (KEY)
SELECT PART FROM PART WHERE PART <> 0;
RETURN;

Saving the parts in this table gave me the opportunity to add a sort order to STACK4 as per the following.

:SORDER = 0;
DECLARE CSUB50 CURSOR FOR
SELECT PART, PARTNAME FROM PART
WHERE PART > 0
ORDER BY PARTNAME;
OPEN CSUB50;
GOTO 530 WHERE :RETVAL <= 0;
LABEL 510;
FETCH CSUB50 INTO :PART, :PNAME;
GOTO 520 WHERE :RETVAL <= 0;
:SORDER = :SORDER + 1;
UPDATE STACK4 SET INTDATA = :SORDER WHERE KEY = :PART;
LOOP 510;
LABEL 520;
CLOSE CSUB50;
LABEL 530;

Then I could make use of STACK4.INTDATA in the statement that inserts the data into LABELS, ensuring that the labels would indeed be sorted by partname. Later on I noticed that the programmer had not used the field LABELS.SORT - STACK4.INTDATA can be inserted into this field in order to ensure the correct sort order [see below *].

Reviewing the code, I saw that the programmer had declared a subroutine 30 that at first glance appeared to be the same as subroutine 20; the only difference was that instead of FROM PART, the second subroutine had FROM PART ORIG - in other words, using the unlinked PART table instead of the linked table. How was the programmer checking whether PART was linked? By the following code:

:E_COUNT = 0;
SELECT COUNT(*) INTO :E_COUNT FROM PART
WHERE PARTNAME NOT IN ('', :E_CHVAL);
:E_ORIGCOUNT = 0;
SELECT COUNT(*) INTO :E_ORIGCOUNT FROM PART ORIGPART
WHERE PARTNAME NOT IN ('', :E_CHVAL);
:E_LINKPART = (:E_COUNT <> :E_ORIGCOUNT AND :E_COUNT <> 0 ? 'Y' : '\0');

Either SUB 20 or SUB 30 would be invoked, depending on the value of the variable :E_LINKPART. Although I haven't seen the definitions of the various parameters, I am sure that the parameter PAR (implying that this code is called from another procedure) is defined as FILE. If it were defined as NFILE, then there would be no need for :E_LINKPART, and only one subroutine would be needed.

LINK PART TO :$.PAR;
ERRMSG 500 WHERE :RETVAL <= 0;
GOTO 1 FROM PART WHERE PART > 0;
UNLINK PART;
LABEL 1;
...
GOSUB 20;

I realise that I've made a mountain out of a molehill, but it seems that the original programmer had to build a baroque solution to solve a problem that has a very simple solution. This is not the first time I've seen this, and I think that it stems from an incomplete understanding of how to program - not necessarily how to program in Priority.

Even without using NFILE, the programmer still could have saved the SUB 20/30 duplication:

:E_COUNT = :E_ORIGCOUNT = 0;
SELECT COUNT (*) INTO :E_COUNT FROM PART;
SELECT COUNT (*) INTO :E_ORIGCOUNT FROM PART ORIG;
GOTO 1 WHERE :E_COUNT <> :E_ORIGCOUNT;
UNLINK PART;
LABEL 1;

After all, there's no point in the clause WHERE PARTNAME NOT IN ('', :E_CHVAL) if the same clause is used in both queries; at worst, both values might be one too high (e.g. 37 instead of 36) but as equality is being checked, it makes no difference. Incidentally, :E_CHVAL appears not to be defined anywhere in the procedure; another rookie mistake (it turns out that :E_CHVAL was defined in a previous stage of the procedure).

* To show that I too am not immune to writing sub-optimal code, the code that enters data into STACK4 could be rewritten as below, without the need of a prior INSERT INTO statement. There's no real need to use a subroutine at all as this code is called only once, although I understand why the programmer did this: using subroutines gives the procedure a sense of structure.

SUB 20;
/* Insert data into STACK4 in partname order */
DECLARE CSUB20 CURSOR FOR
SELECT PART, PARTNAME, SQL.LINE FROM PART
WHERE PART > 0
ORDER BY PARTNAME;
OPEN CSUB20;
GOTO 230 WHERE :RETVAL <= 0;
LABEL 210;
FETCH CSUB20 INTO :PART, :PNAME, :LINE;
GOTO 220 WHERE :RETVAL <= 0;
INSERT INTO STACK4 (KEY, INTDATA) VALUES (:PART, :LINE);
LOOP 210;
LABEL 220;
CLOSE CSUB20;
LABEL 230;
RETURN;

Sunday, 25 August 2024

Is this the lamest code I've ever seen?

In the course of debugging a procedure that someone else had written years ago for a customer, I came across the following code:

DECLARE UPD_DATA9 CURSOR FOR
SELECT DTOA (:$.DAT, 'DD/MM/YY') FROM DUMMY;
OPEN UPD_DATA9;
LABEL 10;
FETCH UPD_DATA9 INTO :IVDATE;
GOTO 100 WHERE :RETVAL <= 0;
UPDATE TEST_INVLOADDIV
SET TEST_IVDATE = ATOD (:IVDATE, 'DD/MM/YY');
LOOP 10;
LABEL 100;
CLOSE UPD_DATA9;

This code simply updates the field 'TEST_IVDATE' in all the rows of the table 'TEST_INVLOADDIV' with the date :$.DAT. This must be the lamest code that I've ever seen as there are so many stylistic errors. That is not to say that this code won't achieve what it's supposed to but it is so wrong!

Where should I start? The global parameter :$.DAT is guaranteed to be a date so there's no need to turn it into a string in the second line, then turn it back into a date in the eighth line. There is absolutely no reason to use a cursor that iterates over ... a global parameter! Totally pointless. The above code was written as a separate SQLI stage in a procedure; there's no real reason why it couldn't have been included in another stage with more (bad) code. The above code can be written succinctly as

UPDATE TEST_INVLOADDIV SET TEST_IVDATE = :$.DAT;

Of course, if the programmer were being paid by lines written, then the original version is much better, having 11 lines. My streamlined version wouldn't earn the programmer very much, having only one line. But if the programmer is being paid by her level of sophistication ....

Friday, 16 August 2024

More fixing a complicated problem with BOM revisions and 'info only' parts

The code given in the previous blog was almost correct, but subtly wrong. I had ignored the fact that each son part would appear three times, and that the first instance would set SONACT to be -1 thus causing the son not to appear in the report, even though there would be a correct instance later on. A more correct version of this code is

SUB 820;
:OK = 0;
/* the query below may fail */
:RVDATE = 01/01/88;
SELECT MAX (ORIG.RVTILLDATE) INTO :RVDATE
FROM PARTARC ORIG
WHERE ORIG.PART = :PARENT AND ORIG.SON = :SON
AND ORIG.INFOONLY <> 'Y';
:OK = (:RVDATE = 01/01/50 ? 1 : 0);
GOTO 821 WHERE :OK = 1;
UPDATE PARTARC /* effectively remove the part from the tree */
SET SONACT = -1
WHERE SON = :SON;
LABEL 821;
RETURN;

Saving the maximum value of RVTILLDATE into a variable was not in my original code; this was intended to help with running the procedure with the debugger. But for some reason the 'wonderful' web interface didn't work so I didn't get an automatic debug output.

There was a strange phenomenon whose source I failed to track down, even after wasting an hour on it. The part that I was using for testing should have four active sons, but for some reason five were being displayed. That fifth part had been marked 'for info only' in all the versions so the 'select max' query above should have failed. But somehow this fifth part never even got to the subroutine, a failure that I could not track. Eventually I added a cursor that is called immediately after the SONRAW procedure.

DECLARE C840 CURSOR FOR
SELECT SON, SONPARTREV FROM PARTARC
WHERE SON > 0;
OPEN C840;
GOTO 843 WHERE :RETVAL <= 0;
LABEL 841;
FETCH C840 INTO :SON, :PARENT;
GOTO 842 WHERE :RETVAL <= 0;
:A = :B = 0;
SELECT COUNT (*), SUM ((INFOONLY = 'Y' ? 1 : 0)) INTO :A, :B
FROM PARTARC PA
WHERE PART = :PARENT AND SON = :SON;
LOOP 841 WHERE :A <> :B;
UPDATE PARTARC /* this is the linked table */
SET SONACT = -1
WHERE SON = :SON;
LOOP 841;
LABEL 842;
CLOSE C840;
LABEL 843;

This cursor got rid of the spurious part and probably helped the rest of the procedure run a bit faster as there would have been fewer parts to check in the main cursor.

Working with BOM revisions is very complicated!

Sunday, 30 June 2024

Fun and games with the :$.PAR parameter / 2

I proudly connect to the client's client's computer, update the IVSTORNO replacement procedure ... and watch it do nothing.

It took a while to realise that I work with the classic, Windows, interface where EXECUTE WINACTIV is fine, whereas the client works with the web interface that requires EXECUTE ACTIVATE. Once that little detail was fixed, the procedure worked flawlessly.

Saturday, 29 June 2024

Fun and games with the :$.PAR parameter

A client of a client wanted, when cancelling an invoice, to add a reason for the cancellation. This would be done by creating a new procedure (actually two) that gets the reason from the user, updates the invoice then calls one of the standard procedures for cancelling invoices (IVSTORNO or IVSTORNO2). I falsely assumed that this wouldn't be much of a problem. First I defined a new table that would hold the reasons for cancellation, then I developed a screen for this table; next I added an appropriate field to the INVOICES table and added the appropriate fields to the forms AINVOICES and CINVOICES. So far so good. As the procedures that cancel invoices are called by direct invocation, they have a :$.PAR parameter that is a linked file holding the number of the invoice to be cancelled.

My original attempt at adding the reason code to the cancelling procedure went as follows:

LINK INVOICES TO :$.PAR;
GOTO 99 WHERE :RETVAL <= 0;
:IV = 0;
SELECT IV INTO :IV FROM INVOICES WHERE IV > 0;
UNLINK INVOICES;
GOTO 99 WHERE :IV = 0;
UPDATE INVOICES SET NEWFIELD = :$.NF WHERE IV = :IV;
/* call original procedure */
EXECUTE WINACTIV '-P', 'IVSTORNO', 'INVOICES', :$.PAR;
LABEL 99;

The first part of this procedure worked perfectly: I extracted the id number of the invoice that was about to be cancelled and updated the reason field. But when I went to cancel it, the cancelling procedure failed. When I removed my code from this procedure, the cancellation worked properly (thus showing that the 'execute winactiv' call was correct).

I was facing a conundrum: how can I update the invoice when the only way of getting its number is via the :$.PAR parameter? It appears that accessing the parameter empties :$.PAR, so the call to IVSTORNO will fail. I tried workarounds like saving the :$.PAR parameter in a local variable then using this variable for the IVSTORNO call, but this too failed.

After much playing around, I did find a method that didn't involve accessing the :$.PAR parameter. My thinking was that instead of trying to locate the invoice that is to be cancelled, I should find the invoice that has been created to cancel out the original invoice; this new invoice holds a pointer to the original invoice in the field PIV. So my code went as follows:

EXECUTE WINACTIV '-P', 'IVSTORNO', 'INVOICES', :$.PAR;
:IV = :PIV = 0;
SELECT MAX (IV) INTO :IV FROM INVOICES
WHERE TYPE IN ('A', 'C') AND STORNOFLAG = 'Y'; /* this will be the cancelling invoice */
GOTO 99 WHERE :IV = 0;
SELECT PIV INTO :PIV /* this is the original invoice */
FROM INVOICES WHERE IV = :IV;
GOTO 99 WHERE :PIV = 0;
UPDATE INVOICES SET NEWFIELD = :$.NF WHERE IV = :PIV;
LABEL 99;

This actually worked but I wasn't too happy with it as someone else might be cancelling an invoice at the same time. Let's say that it has a 99.99% chance of working as opposed to a 100% chance (i.e. certainty). In other words, a kludge.

An hour or so later, the correct solution popped into my mind:

LINK INVOICES TO :$.PAR;
GOTO 99 WHERE :RETVAL <= 0;
:IV = 0;
SELECT IV INTO :IV FROM INVOICES WHERE IV > 0;
UNLINK AND REMOVE INVOICES;
GOTO 99 WHERE :IV = 0;
UPDATE INVOICES SET NEWFIELD = :$.NF WHERE IV = :IV;
/* Restore the value in :$.PAR */
LINK INVOICES TO :$.PAR;
GOTO 99 WHERE :RETVAL <= 0;
INSERT INTO INVOICES
SELECT * FROM INVOICES ORIG WHERE IV = :IV;
UNLINK INVOICES;
/* call original procedure */
EXECUTE WINACTIV '-P', 'IVSTORNO', 'INVOICES', :$.PAR;
LABEL 99;

The new parts of this procedure are the UNLINK AND REMOVE and the subsequent re-link and INSERT. The first part of this code is the same as my original code, albeit with one significant difference: :$.PAR is unlinked and removed. This basically frees up the parameter for future use. After the invoice is updated with the cancellation code, INVOICES is once again linked to :$.PAR, but now the linked file is empty; the original record is then inserted into the linked file and the file closed. As far as IVSTORNO is concerned, nothing has changed; :$.PAR contains the key to the invoice that must be cancelled.

Monday, 3 October 2022

Beware when using SQL.LINE

In my experience, there are two general cases of insertion into a table during a procedure: the table is normally one of the STACK tables or GENERALLOAD, but in certain cases can be another table, normally a private one that has been defined specially for the procedure. The identity of the table is not important in the cases that I am going to describe. The two types might be called 'explicit' and 'implicit', with reference to the key field of the table into which data will be inserted.

I am going to describe in general terms the copying of a customer order. This would use the GENERALLOAD table; the fields of the order header would go into a tuple whose value for RECORDTYPE would be '1' and whose LINE would be 1. The technique for copying the order lines depends on whether only the lines are being copied, or whether any sub-forms of the lines are being copied as well. In the first case (no sub-forms), one can simply write

INSERT INTO GENERALLOAD (LINE, RECORDTYPE, ...
SELECT 1 + ORDERITEMS.LINE, '2', ...

If one were feeling adventurous, or there was no natural key for the sub-form, one could replace ORDERITEMS.LINE with SQL.LINE. What is important is that this number is incremented by one every time, as line 1 in GENERALLOAD holds the header line. This is what I would describe as 'implicit' inserting.

Should there be sub-forms, the data has to be entered by means of a cursor, where first line data is added then sub-form data. As there will no longer be any correspondence between GENERALLOAD.LINE and ORDERITEMS.LINE, one has to maintain a local variable (normally :LINE) whose value is incremented prior to every insert. This is 'explicit' inserting. At the same time, data for the sub-form could be inserted either implicitly or explicitly.

:LINE = 1;
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, ...
SELECT :LINE, '1', {header data};
DECLARE C1 CURSOR FOR SELECT ORDERITEMS.ORDI, ....
OPEN C1;
GOTO 200 WHERE :RETVAL <= 0;
LABEL 100;
FETCH C1 INTO :ORDI, ....
GOTO 200 WHERE :RETVAL <= 0;
:LINE = :LINE + 1;
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, ...
VALUES (:LINE, '2', {line data} ...);
/* sub-form */
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, ...
SELECT :LINE + SQL.LINE, '3', {sub-form data} ...;
LOOP 100;
LABEL 200;
CLOSE C1;
EXECUTE INTERFACE ....

There is a deliberate mistake in the above code, but first let's think it through. The value of GENERALLOAD.LINE for header data will obviously be 1, and the value of this field for the first line's data will be 2 (as the variable :LINE is explicitly incremented). For the first line of the sub-form data, GENERALLOAD.LINE will be :LINE + SQL.LINE, i.e. 2 + 1, or 3. The second line will have GENERALLOAD.LINE = 4. This can be represented in the following table
 
GENERALLOAD.LINE DATA
1 order header
2 first line of order
3 first line of sub-form for order line 1
4 second line of sub-form for order line 1

So far so good. For the second order line, :LINE will explicitly be incremented, so this line will have GENERALLOAD.LINE = 3 ... except for the fact that there is already a tuple with this key value in GENERALLOAD, and so the second line will not be inserted. What is missing is the following line:
SELECT MAX (LINE) INTO :LINE FROM GENERALLOAD;
This line should appear just before LOOP 100. As a result of this line, :LINE will have the value 4 after the insertion of the second sub-form line, and as this value is incremented prior to inserting the second order line, this line will have GENERALLOAD.LINE = 5.
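To make the fix concrete, here is a sketch of the tail of the loop with the missing statement in place (based on the code above; the elided header and line inserts are unchanged):

```
/* ... header and line inserts as above ... */
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, ...)
SELECT :LINE + SQL.LINE, '3', {sub-form data} ...;
/* the missing statement: pick up the highest key used so far,
   so that the next explicit increment cannot collide */
SELECT MAX (LINE) INTO :LINE FROM GENERALLOAD;
LOOP 100;
LABEL 200;
CLOSE C1;
```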

One can generalise this: whenever one uses the construct 'INSERT INTO ... SELECT SQL.LINE', one must offset SQL.LINE with a variable (such as :LINE or :MAX), and then, after the INSERT statement, there should come a statement that selects the maximum key inserted so far into that same variable.

Such a simple heuristic, so easy to forget: if this variable (:LINE) is not used again, it seems that there is no point in extracting its value. For example, I have written many procedures that send a report by email: the procedure starts by defining some variables, then there is an insert statement into STACK4 (or similar) based on those variables, using SQL.LINE as the key field, then a report is executed passing STACK4 as its data. There is no loop and so there is no need to select the maximum value of KEY from STACK4. 

The problem rears its head when such code - which was written for a non-looping procedure - gets copied into a procedure that does have a loop. For example, I wrote a somewhat complicated procedure that collects BOM data from all the lines in a given order; this was then extended to work on several orders. Each order is selected via a cursor; local variables have to be reinitialised for each new order, but as the data are being inserted into the same STACK table, the key value should continue to increment. In other words, the maximum value of KEY should be extracted before the LOOP command that causes a new order to be selected.

This blog is of course being written because I fell foul of this heuristic. In this case, the procedure had to call an external program for calculating budget use; this program appears to work on one year's data at a time. As a result, I was forced to use the rather arcane structure of having a loop at the level of SQLI stages as shown below.

Stage - remarks
10 - set up variables
20 - increment variables
30 - call the external procedure
40 - insert the data returned from the external procedure into a special table; at the end, check the terminating condition and set :$.GO appropriately
50 - GOTO 20 if the end has not been reached, 60 if it has been reached
60 - cleanup

One can guess what the problem was: the code in stage 40 had originally been written for one year's data and used the 'INSERT INTO ... SELECT SQL.LINE' construct without having 'SELECT MAX (LINE)' at its end. When the procedure was run, it seemed to work. Let's say that the first year had 200 lines to be inserted, the second year 300 lines and the third year 100 lines. In this case, the 200 lines of the first year would get inserted without problem; the first 200 lines of the second year would not be inserted because SQL.LINE would return values that had already been inserted, but the final 100 lines (i.e. 201-300) would be inserted, giving the impression that the procedure worked for this year. Data from the final year would not be inserted at all.

It took me quite some time to figure out what the problem was; this was exacerbated by the fact that the procedure was working on a client's data (which are unfamiliar to me) and was based on complicated code involving budgets (also somewhat unfamiliar to me). All the checks that I inserted (sorry for the inadvertent pun) showed the expected results, but somehow data was not being inserted into the table. I think that I wrote about a similar problem years ago: when data does not get inserted into a table, check its primary key.

It was less clear in this case because the loop construct was not within the same stage as the data insertion, but even so: after 'INSERT INTO ... SELECT SQL.LINE', ALWAYS add 'SELECT MAX (LINE)' immediately after the insert statement.

Wednesday, 19 January 2022

:HTMLFNCTITLE

The documentation about this variable is as follows: The report title can be changed at runtime in the SQLI step preceding the report step, using the :HTMLFNCTITLE variable. When would one need/desire to use this variable? When one wants the title of a report to contain the value of a variable; for example, 'List of orders entered on 18/01/22' could be achieved by the following means

:PAR1 = DTOA (SQL.DATE8 - 24:00, 'DD/MM/YY');
SELECT ENTMESSAGE ('$', 'P', 10) INTO :HTMLFNCTITLE FROM DUMMY;
where message 10 would be 'List of orders entered on <P1>' (I use this method to get Hebrew strings into a program - it's easier this way than writing naked strings in the program text that get reversed ... or not). As the documentation states, the above should appear in an SQLI step prior to the report whose title one is creating.

This technique is especially useful in 'HTML documents'. But should one use this technique when there are more reports to be displayed after the one whose title is to be changed, then one must insert another SQLI stage after the given report to nullify the value of :HTMLFNCTITLE.
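As a sketch, that extra stage need only reset the variable (assuming the stage sits between the retitled report and the reports that follow it):

```
/* SQLI stage placed immediately after the retitled report */
:HTMLFNCTITLE = '';
```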

One might be tempted to use :HTMLFNCTITLE in an 'ordinary' procedure consisting of an SQLI step to collate/calculate data, followed by a CHOOSE step then a few reports, where one will be activated according to the value of the CHOOSE step. Don't get me wrong: the technique will still work, provided one inserts an SQLI step before the desired report. 

So where's the pitfall? 'Ordinary' reports created without :HTMLFNCTITLE display their title (this may come from the report's title - which is limited in length - or from the sub-form 'output title') along with the parameters used to create the report. I find this information very useful when debugging reports that other users have created (e.g. answering the question why a given order doesn't appear in a report - because the parameters of the report don't match those of the order). This information does not appear in a report whose title has been changed by :HTMLFNCTITLE.

So: it's ok to use :HTMLFNCTITLE in an 'HTML document' but not advisable to use it in an ordinary multi-step report.

Friday, 12 November 2021

Executing Priority Commands from an External Application

This is another topic that is documented in the SDK (in chapter 15, advanced programming tools), but as always, what is written is somewhat ambiguous and a few more examples would not have hurt. After having spent no small amount of time figuring out what works, I want to record my experiences here.

But first of all: why do I need to execute a Priority command from an external application? Lately we've been working on raising the level of computerisation on the factory floor by means of barcode readers. My original concept was that there would be a computer running Priority with a special, private and bespoke, screen being displayed, into which barcodes are scanned. The screen has a post-insert trigger for the lines that does something useful and clever.

The next step came from someone asking why this computer has to be running Priority? There are both advantages and disadvantages to this: maybe the disadvantages can be overcome without losing the advantages? So I suggested writing a program in Delphi that accepts the barcodes, saves them to a file and then transfers that file to the server where it is read by a procedure that is activated every half an hour.

An even better solution would be that the computer running a program in Delphi would accept a barcode and transmit it immediately to Priority, possibly handling that barcode at the same time. I had an arrangement like this twenty years ago! It's called Remote Execution, or Rexec.

In a part of the SDK that I had managed to miss was the section that I needed: the calling computer runs a program called WINRUN on the server that runs WINACTIV that runs a procedure (this is only one of the possibilities documented). As I wrote at the beginning, the documentation is ambiguous and I want to explain it fully here.

First off: TABULA.INI. I found the copy of this file that is stored on the SQL server (i.e. the copy that is called when Priority is run on the server) - it's stored in C:\WINDOWS of the server and so is not normally visible. On our server there is a directory D:\pr_SQL that has subdirectories BIN.95, system, etc - i.e. Priority; each installation chooses its own directory for this. This directory is mapped as X: on client computers; on the server I copied c:\windows\tabula.ini to x:\1\tabula.ini, making this file visible to all. On the client computer, where my Delphi program will be run, I defined an environmental variable thus: set TABULAINI=x:\1\tabula.ini. This only has to be done once on the computer.

I then wrote a procedure in Priority, TEST_UPDCONST. This procedure simply updates a value in a private table whenever it is run - this way I could see whether what I was doing was working. Once I had the command debugged, I then investigated how I could pass parameters to the procedure (this obviates the need to transfer files or similar).

Here is the final command that must be sent to Priority, along with comments.

Command part - Description - Constant
X:\BIN.95\WINRUN - Winrun is the program that is run on the server; it is situated at X:\BIN.95 on my server. The location should be BIN.95 on the drive where Priority is located. Constant: Y
"" - Two double quotation marks. Don't ask me why. Constant: Y
tabula 123456 - The account name and password. It makes sense to have tabula run the procedure, thus preventing any problem with permissions. Of course, the password is not 123456.
x:\system\prep - This directory is always used when preparing procedures. Constant: Y
demo - The name of the company (environment) in which the command will be run.
X:\BIN.95\WINACTIV.EXE - The name of the program that runs the procedure. It didn't work without prefacing the name with its location. Constant: Y
-P - A procedure is being run. Constant: Y
TEST_INSERTNESTBC - The name of the procedure.
-var:BC "A09" - First variable that is being passed to the procedure, whose name is BC and whose value is "A09".
-var:TS "12/11/21 09:04" - Second variable that is being passed to the procedure, whose name is TS and whose value is "12/11/21 09:04" (i.e. the current date and time).

Put together, the complete command is X:\BIN.95\WINRUN "" tabula 123456 x:\system\prep demo X:\BIN.95\WINACTIV.EXE -P TEST_INSERTNESTBC -var:BC "A09" -var:TS "12/11/21 09:04".

In the above table, there is a column 'constant'; if the value is Y, then the command part should be written as is (obviously substituting the correct directory on the server). The SDK does not mention that the parameters of the command are case sensitive: if one is passing parameters to the procedure being called, then one must write "-var" and not "-VAR" as I originally did. One optional parameter is "err <filename>": this causes error messages from the called procedure to be written in this file; again, it's "err" in lower case, not "ERR".

The SDK also documents how the called procedure should accept its parameters; I thought that this was ambiguous but turns out to be accurate. In the case shown above, the called procedure (TEST_INSERTNESTBC) has two char parameters, BC and TS. Inside the procedure, they are referred to as :EXTERNAL.BC and :EXTERNAL.TS, not :$.BC and :$.TS. My misunderstanding was with reference to 'EXTERNAL'; in chapter 2 of the SDK there is a list of predefined variables, such as :PAR1, :PAR2, :PAR3 and :KEYSTROKES. I've noted in this blog a few undocumented variables to add to this list. I did not notice that the last variable appearing in the list is :EXTERNAL.VARNAME, "used in procedures to refer to variables inputted [sic] as part of the WINRUN command". This is similar to the half-documented variable :PROGPAR.VARNAME that I discussed here.
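To make the parameter mechanism concrete, here is a sketch of how the called procedure might pick up the values; the local variable names and the closing comment are illustrative, but the :EXTERNAL prefix is as documented in the SDK:

```
/* Inside TEST_INSERTNESTBC: BC and TS are defined as char parameters */
:BC = :EXTERNAL.BC;   /* "A09" in the example command above */
:TS = :EXTERNAL.TS;   /* "12/11/21 09:04" */
/* ... now use :BC and :TS, e.g. insert them into a private table ... */
```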

So now I have a very powerful tool at my disposal. First it will be used in a specific setting (that received its inspiration from another Delphi program that I developed this week that avoided having to access Priority), but I'm sure that in the coming months, this technique will be used more frequently.

It should be noted that this technique is s-l-o-w: it takes about five minutes for the entire process.

Thursday, 9 September 2021

Another undocumented variable in Priority - :ERRMSG

I was faced today with the problem of trapping a certain condition when a line is entered into a form by means of an interface. I want to check that, for all lines of a certain type, a given condition does not occur; if it does, then I want to output an error message and stop the procedure. My initial code was as follows:

ERRMSG 353
FROM TEST_LOADSIFORDERS
WHERE USER = SQL.USER
AND RECORDTYPE = '2'
AND U1 = ''
AND PARTNAME = '000'
AND STRIND (TEC_PN, 1, 8) = 'ASSEMBLY';

The only important thing here is the first line: if the clause is true, then error message 353 will be displayed. The error message itself says that a line has something missing. Unfortunately, this error message doesn't say on which line the error occurred (which wouldn't be useful, either) or which part is incompletely defined. It would be good if some identifier (TEC_PN) could be shown in the error message, but this concise syntax doesn't allow this. Another possibility is a loop, but that's something that could/should be avoided.

Then I remembered that I once wrote about two undocumented variables that can be useful in designing screen triggers, :GOTO and :WRNMSG. These two variables were used in a similar context: if some condition occurred, then the appropriate warning message would be displayed and execution would transfer to the label stored in :GOTO. I wondered whether there is an undocumented variable, :ERRMSG ... and it turns out that there is. So my code is now

SELECT 353, TEC_PN INTO :ERRMSG, :PAR1
FROM TEC_LOADSIFORDERS
WHERE USER = SQL.USER
AND RECORDTYPE = '2'
AND U1 = ''
AND PARTNAME = '000'
AND STRIND (TEC_PN, 1, 8) = 'ASSEMBLY';

with the error message containing both unchanging text and P1, thus killing two birds with one stone. The disadvantage of this code as opposed to a loop is that the above ignores the possibility that two lines might have the error. I don't have any statistics at hand but I imagine that this error occurs about once every two hundred lines, which is why I don't want to slow down the 199 lines that are correct.
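For completeness, a loop-based alternative that would report every offending line might look something like the sketch below (a hedged sketch, not code from the original procedure; it assumes message 353 displays <P1>, and uses WRNMSG so that all offending lines are reported before any further action):

```
DECLARE BAD CURSOR FOR
SELECT TEC_PN FROM TEC_LOADSIFORDERS
WHERE USER = SQL.USER
AND RECORDTYPE = '2'
AND U1 = ''
AND PARTNAME = '000'
AND STRIND (TEC_PN, 1, 8) = 'ASSEMBLY';
OPEN BAD;
GOTO 99 WHERE :RETVAL <= 0;
LABEL 10;
FETCH BAD INTO :PAR1;
GOTO 99 WHERE :RETVAL <= 0;
/* warn for each bad line instead of stopping at the first */
WRNMSG 353;
LOOP 10;
LABEL 99;
CLOSE BAD;
```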

Monday, 19 July 2021

Choose-field trigger for procedural parameters

One of my clients wanted a report that would compare the price of the same part at different dates. The comparison part is simple (although the report should include dynamic column titles, an interesting technique) but choosing the price lists is complicated: a supplier can have several price lists that are active concurrently, and a price list does not have an easily choosable primary key.

The ideal user interface would be supplier number and two dates, where the dates come from the supplier's price lists. This requires what could be termed a conditional choose-field trigger, as the trigger's code is dependent on the supplier's number. This is, to be blunt, very difficult to achieve in Priority.

I wrote on this topic a few years ago and quoted text from the SDK that purports to explain. I'll quote the text again here so that I might comment upon it: You can also write a specific CHOOSE-FIELD or SEARCH-FIELD for the procedure. Your trigger can contain references to any input value specified by the user within the same [emphasis mine] procedure step. For instance, if the procedure step contains an input parameter called CST, its value will be stored in the :PROGPAR.CST variable. This is useful, for example, if a given procedure step contains an input column for a Sales Rep and another input column for a Customer, and you want the Choose list for the latter column to display only those customers that are associated with the specified sales rep.

It would have helped greatly had the SDK presented a detailed example of this usage, or at least named a standard procedure that uses this technique. I tried to write a trigger using this technique; I assumed that PROGPAR was the name of the procedure, so I tried something similar with TEST_COMPAREPRICELIST.SUP, but I received an error message.

In the end, I wrote a slightly complicated report that displays the text of a given trigger for parameters in procedures; using this report, I was able to see that standard report INCOME_BUDGETS uses this technique in step 4 for the parameter VR -
SELECT VERDES, VERCODE, VALID
FROM BUDVERSIONS
WHERE GL = (SELECT GL FROM GENLEDGERS WHERE GLNAME = :PROGPAR.GLF AND GL <> 0)
ORDER BY 3 DESC, 2;
From this code, it can be seen that :PROGPAR is the name of a variable and not the name of a procedure; GLF is a parameter that is connected to a table. But this is not as simple as it seems: GLF receives a value that is calculated in a previous stage. Is this necessary?

After banging my head against a brick wall for a while, I finally found the correct combination of parameters that will allow me to choose a supplier then receive a list of dates for the supplier's price lists. The procedure uses a regular 'line' parameter, SUP, that is defined to receive a supplier's number. The important fact to remember is that in the choose-field trigger, :PROGPAR.SUP is the value that is entered in the SUP parameter (in this case, the supplier's external number), not the internal number of the supplier and not the name of the table that is linked to the parameter.

The choose-field trigger for DT1 that actually works is as follows. Converting the dates to integers allows them to be sorted in reverse order (newest first); otherwise they get sorted in ASCII order which is incorrect.
SELECT DTOA (SUPPRICELIST.SUPPLDATE, 'DD/MM/YY'),
DTOA (SUPPRICELIST.SUPPLDATE, 'DD/MM/YY'),
ITOA (SUPPRICELIST.SUPPLDATE)
FROM SUPPRICELIST, SUPPLIERS
WHERE SUPPRICELIST.SUP = SUPPLIERS.SUP
AND SUPPLIERS.SUPNAME = :PROGPAR.SUP
ORDER BY 3 DESC;

Wednesday, 14 April 2021

Sonraw

The external program SONRAW is invaluable: it takes a list of parts and builds an 'exploded' bill of materials for each part. Unfortunately, there are also a few very important 'gotchas' of which we have to be aware. These arise due to what might be considered a poor decision by the original system developers: the output of SONRAW is stored in the table PARTARC, which is the same table in which are stored the default BOM data.

A brief explanation 
Putting this another way, the 'normal' PARTARC table stores a parent part and its direct sons, i.e. to a depth of 1. Let's say that there is a parent part A that has sons B, C and D: there will be three rows in the PARTARC table where the parent is A. But B, C and D might have sons of their own (e.g. B1, B2, B3, C1, etc.); in this case there will be three rows in PARTARC where the parent is B (the sons being B1, B2 and B3). If it's not yet clear: a part can appear both in the 'part' field of PARTARC, when it is a parent, and in the 'son' field, when it is a son.

What does SONRAW do? If there are three rows, A/B, A/C and A/D, and another three rows B/B1, B/B2 and B/B3, the resulting linked PARTARC table will also contain six rows, but these will be A/B, A/B1, A/B2, A/B3, A/C and A/D although not necessarily in this order. There is a field in PARTARC that stores the depth of the son; for A/B, the depth will be 1, but for A/B1, the depth will be 2. This data enables one to write a relatively simple report that displays the recursive tree as a list.

How does one use SONRAW? 
One can use SONRAW as a separate procedure step (see procedure PARTTREE, stage 40) or one can call SONRAW within an SQLI step; it's slightly more than a matter of preference - I find it easier to call SONRAW within an SQLI step. Whichever way, there must be five parameters passed; these are 

  1. a table linked to PART - this will contain the parent parts
  2. a date - I think that this is useful when installations have different versions of BOMs. I don't have this requirement and so always pass today's date
  3. a table linked to PARTARC - this will contain the output
  4. an integer between 0 and 4 - this indicates which option to use (look at PARTTREE, stages 10-30). I normally use 0; 2 will include phantoms in the output (?)
  5. a flag that can either be empty or '-N'; the 'N' probably stands for 'normal'. Leaving the flag empty will cause the output to be partial; see PARTTREE stage 10 for 'an explanation'
My normal code for using this is as follows
LINK PART TO :$.PRT;
ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC;
ERRMSG 1 WHERE :RETVAL <= 0;
/* the parameter 2 means show phantoms as well as standard parts */
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 2, '-N';
What comes after this depends on the application. One can unlink the tables, then pass PARTARC to a report that prints out the tree. I often have a cursor that runs after the SONRAW code in order to extract only purchase parts (and writing this now, I realise that the fourth parameter to the call above is '2'; according to PARTTREE, this means that only R parts will be included in the output).
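As an illustration of the 'cursor after SONRAW' pattern mentioned above, here is a sketch (the cursor name is mine, and the explicit ORIG.TYPE = 'R' filter is belt and braces, given that the fourth parameter already restricts the output to R parts):

```
DECLARE RAW CURSOR FOR
SELECT PARTARC.SON, PARTARC.COEF, PARTARC.SONACT
FROM PARTARC, PART ORIG
WHERE PARTARC.SON = ORIG.PART
AND ORIG.TYPE = 'R'   /* purchase parts only */
AND PARTARC.PART > 0
ORDER BY 3;
OPEN RAW;
GOTO 99 WHERE :RETVAL <= 0;
LABEL 10;
FETCH RAW INTO :SON, :COEF, :SONACT;
GOTO 99 WHERE :RETVAL <= 0;
/* ... do something with the purchase part ... */
LOOP 10;
LABEL 99;
CLOSE RAW;
```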

So where are the pitfalls?

There are several fields in the PARTARC table that appear (unsurprisingly) in the PARTARC screen (and also PARTARCONE) that SONRAW trashes. Prior to SONRAW, the field PART contains the internal part number of a given parent (e.g. B in the simple example above), SCRAP contains the percentage scrap of part B1 (the calculation of the scrap is not straightforward, but that's another story), SONREVNAME contains the internal number of the son revision chosen for this line, and SONQUANT contains the quantity of B1 needed for one unit of B. But after SONRAW, PART contains the internal part number of the top part (i.e. A), COEF contains the quantity of B1 needed for one unit of A, and SONQUANT contains the internal part number of the original parent (i.e. B). SCRAP and SONREVNAME contain values but they're not what they were prior to running SONRAW (and I don't know what they are).

PARTARC.SONACT also gets trashed but in this case, the new value after SONRAW is very useful: it holds a sequential index to the lines in PARTARC (i.e. 1, 2, 3). This gets used in any report based on PARTARC as it presents the lines in the correct, hierarchical, order. But this index has added value: it can be used as an index into an auxiliary table (e.g. STACK4) that holds additional data for each line in the exploded tree.

In the past, I discovered that I have to write some obscure code in order to obtain the scrap and revision number, as follows.

DECLARE C3 CURSOR FOR
SELECT PARTARC.SON, PARTARC.COEF, PARTARC.VAR, PARTARC.SONACT, ROUND (PARTARC.SONQUANT)
FROM PARTARC, PART ORIG
WHERE PARTARC.SON = ORIG.PART
AND PARTARC.PART = :PART
AND PARTARC.SON > 0
ORDER BY 4;
OPEN C3;
GOTO 300 WHERE :RETVAL <= 0;
LABEL 210;
FETCH C3 INTO :SON, :COEF, :VAR, :SONACT, :SQ;
GOTO 299 WHERE :RETVAL <= 0;
:SCRAP = 0.0;
:REV = 0;
SELECT SCRAP, SONREVNAME INTO :SCRAP, :REV
FROM PARTARC ORIG
WHERE PART = :SQ AND SON = :SON;
INSERT INTO STACK4 (KEY, REALDATA, INTDATA)
VALUES (:SONACT, :SCRAP, :REV);
LOOP 210;
LABEL 299;
CLOSE C3;
LABEL 300;

The final lines above show :SCRAP and :REV being stored into an auxiliary table that gets passed to the report along with the linked PARTARC table, so that the report can show the necessary values. The cursor uses ROUND (PARTARC.SONQUANT) as SONQUANT is a real number, but an internal part number is an integer, of course.

I wasted about an hour today because of this problem with SONREVNAME. It's not a field that I use myself, but a company for whom I am writing an exceedingly complicated BOM report needs it. Eventually the penny dropped as to the problem: I already had the code for obtaining the correct value of SCRAP, so I adapted it for this new program.

Friday, 5 February 2021

A safe method of extracting 'son' parts from a bill of materials

I am often asked to prepare a type of report that requires descending through a part's bill of materials and doing something with the 'son' parts, for example showing the cost of raw materials for each part and the cost of work for that part. Such reports (or more correctly, the procedure that prepares the data to be shown in the report) use the external SONRAW procedure along with temporary PART and PARTARC tables. It also happens that the data for each primary part is to be stored separately; for example, such a report showing raw materials for an order could display the data in two different ways: either all the parts required for the complete order, or all the parts required for each line in the order. In the first case, SONRAW would be called once for the order, whereas the second case might require SONRAW to be called in a loop.

I would start such a procedure with the lines

LINK PART TO :$.PRT;
ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC;
ERRMSG 1 WHERE :RETVAL <= 0;
Then parts would be entered into the linked table in the following manner
INSERT INTO PART (PART, PARTNAME, FATQUANT)
SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (SUM (ORDERITEMS.QUANT))
FROM ORDERITEMS, PART ORIG
WHERE ORDERITEMS.PART = ORIG.PART
AND ORDERITEMS.ORD = :ORD
AND ORIG.TYPE = 'P'
GROUP BY 1, 2;
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 0, '-N';
The above example assumes that the code is run in a loop, each time for a different order (represented by :ORD). After SONRAW, something would be done with the 'exploded tree' that is stored in the linked PARTARC table.

So far, so good. So where is the problem? It would often happen that I would add two lines before the 'INSERT INTO PART' statement -
DELETE FROM PART;
DELETE FROM PARTARC;
These are linked tables, right? And so the deletion takes place on these linked tables. And yet somehow, whilst running a procedure that uses this code yesterday, the real PART and PARTARC tables were deleted! I have run such code thousands of times and there has never been a problem with it, but something must have happened to prevent the linkage and so the deletion removed the real tables. 'Fortunately' this was at the beginning of the day, so the previous day's backup was sufficient to restore the missing data.

As this is the second time in a month that I have managed to delete real tables, I am now trying to adopt the attitude that the DELETE statement MUST NEVER BE USED!!! So how can I ensure that I have empty temporary tables before the insertion of new parts? By using UNLINK and REMOVE:
UNLINK AND REMOVE PART;
UNLINK AND REMOVE PARTARC;
LINK PART TO :$.PRT;
ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC;
ERRMSG 1 WHERE :RETVAL <= 0;
INSERT INTO PART (PART, PARTNAME, FATQUANT)
SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (SUM (ORDERITEMS.QUANT)) ...
This looks like code waiting to be optimised - surely unlinking and then relinking a table will take time - but this is safe code, something much more important than saving a few seconds (as opposed to shutting down a company for several hours whilst data is restored - I claim that I'm checking the disaster recovery procedure).

What does the SDK have to say on this topic? Use the AND REMOVE option if you wish the linked file to be deleted when it is unlinked. This is necessary when working with loops, particularly when manipulations are carried out on the data in the linked file. If you do not remove the linked file, and the function using LINK and UNLINK is called more than once, you will receive the same copy of the table during the next link. So, if you want the LINK command to open a new (updated) copy of the table, use UNLINK AND REMOVE. That's fairly clear for a change. So as modern day Americans say (in English that would have thrown me out of school), "my bad".

Thursday, 4 February 2021

A method of displaying in a report values that can come from one of several tables

It's not exactly a secret that a report in Priority can display in one field a value that can come from one of several tables, but this useful functionality is hidden so well that many programmers may never come across it. An example of what I mean would be a report that displays sales to a customer, where a sale is defined as either a delivery note (DOCUMENTS) or a direct invoice (INVOICES) - specifically screens DOCUMENTS_D and AINVOICES. Such a report would show either DOCUMENTS.DOCNO or INVOICES.IVNUM in the 'document' field. I'm sure that there is a standard report that does this, but I couldn't find it, so instead look at the standard report DISTRLISTDOC, field #120.

I often use this technique in a procedure, normally using STACK8: in one of the key fields I would store the document's A value (i.e. DOCUMENTS.DOC or INVOICES.IV), and in the second key field, I would store a value to distinguish between the two, i.e. 1 means DOCUMENTS and 2 means INVOICES. When it comes to the report, I have to add several calculated fields:

DOCUMENTS.DOC ... = (STACK8.KEY2 = 1 ? STACK8.KEY1 : 0)
INVOICES.IV ... = (STACK8.KEY2 = 2 ? STACK8.KEY1 : 0)
(document number) (STACK8.KEY2 = 1 ? DOCUMENTS.DOCNO : INVOICES.IVNUM)
(document date) (STACK8.KEY2 = 1 ? DOCUMENTS.CURDATE : INVOICES.IVDATE)

This isn't particularly difficult to do when there are only two tables involved, but sometimes there are more, and these expressions become hairy. I was working on a report that can display data from these screens: DOCUMENTS_A, DOCUMENTS_D, DOCUMENTS_P, YINVOICES, ORDERS and PORDERS. It's a report that displays documents connected to a project, hence the variety of data. The 'selector' code (e.g. STACK8.KEY2 = 1 ? DOCUMENTS.DOCNO : INVOICES.IVNUM) has to be written at least three times (once for the document number, once for the date, once for the screen that should be opened for the document and possibly once for the customer) and of course the possibility of error grows very quickly.

The other day it occurred to me that I could remove this complexity with a simple change: instead of storing the document number and type in the STACK8 table, I could use a few fields that exist in STACK4, namely 'docno', 'datadate' and 'chardata'. The primary key is a simple incrementing variable bearing no information; DOCUMENTS.DOCNO, INVOICES.IVNUM, ORDERS.ORDNAME, etc are stored in the 'docno' field; DOCUMENTS.CURDATE etc are stored in the 'datadate' field, and DOCUMENTS.TYPE (or similar) is stored in the 'chardata' field.

INSERT INTO STACK4 (KEY, INTDATA, DETAILS, DOCNO, REALDATA2, CHARDATA, DATADATE)
SELECT :LINE + SQL.LINE, :KEY, :DETAILS, DOCUMENTS.DOCNO,
(-1.0) * SUM (PART.COST * REALQUANT (TRANSORDER.QUANT)),
DOCUMENTS.TYPE, DOCUMENTS.CURDATE
FROM DOCUMENTS, TRANSORDER, PART, DOCUMENTSA, PART MPART ...
The report that receives the data no longer has to 'decode' the document number: it simply displays STACK4.DOCNO; the same for the date. The only field that is not simplified is the one that chooses which screen should be displayed if one clicks on the document number. I suppose that I could store the appropriate EXEC value in the stack; maybe this will come as a future improvement. The reason why I hesitate to do this is that there is no link between a table of data and the screen in which it is to be displayed. The way to do this is to add several queries such as
:EXEC = 0;
SELECT EXEC INTO :EXEC FROM EXEC WHERE ENAME = 'ORDERS' AND TYPE = 'F';
---- or ----
:EXEC = 0;
SELECT DOCTYPES.EXEC INTO :EXEC FROM DOCTYPES, DOCUMENTS
WHERE DOCTYPES.TYPE = DOCUMENTS.TYPE AND DOCUMENTS ....
This is a rare instance of something that is harder to do in a procedure than it is in a report. At the moment, the value in 'chardata' serves to identify which screen to display, in the following manner:
EXEC.ENAME = (STACK4.CHARDATA = 'O' ? 'ORDERS' :
 (STACK4.CHARDATA = 'Y' ? 'YINVOICES' :
  (STACK4.CHARDATA = 'A' ? 'DOCUMENTS_A' :
   (STACK4.CHARDATA = 'D' ? 'DOCUMENTS_D' :
    (STACK4.CHARDATA = 'P' ? 'DOCUMENTS_P' :
     (STACK4.CHARDATA = 'Q' ? 'PORDERS' : 'F'))))))

I should explain the final screen name, F. The report displays some data that are not connected to any screen; it happened that when the final line was 'PORDERS' : '')))))), these data were not displayed. Priority has a screen called 'F' that is based on the DUMMY table and seems ideal for this purpose.

Tuesday, 2 February 2021

Filtering on non-key fields

Nearly five years (!) have passed since I last wrote on this topic (then called 'zero values in Priority') on the other blog. I will explain again: normally, when one adds parameters to procedures, each parameter is based on a field that is guaranteed to have a value (e.g. customer number in an order, order type or similar). But it often happens that one wants a parameter on a field that is not guaranteed to have a value - today's post will discuss filtering customers by their 'customer type' field, which is not mandatory.

The original post showed how to filter successfully when a cursor is used, but I never found a satisfactory solution to filtering when tuples are saved using the 'insert/select' syntax. Looking at the old code, an answer popped into my head.

The first part is the same as ever:

:LINKED = :UNLINKED = 0;
SELECT COUNT (*) INTO :UNLINKED FROM CTYPE;
LINK CTYPE TO :$.CTP;
ERRMSG 1 WHERE :RETVAL <= 0;
SELECT COUNT (*) INTO :LINKED FROM CTYPE;

But the way that this is used is different:

INSERT INTO STACK4 (KEY, INTDATA, REALDATA)
SELECT ORDERS.CUST, CUSTOMERS.CTYPE, SUM (ORDERS.DISPRICE)
FROM ORDERS, CPROFTYPES, BRANCHES, CUSTOMERS, CTYPE
WHERE ORDERS.ORDTYPE = CPROFTYPES.CPROFTYPE
AND ORDERS.BRANCH = BRANCHES.BRANCH
AND ORDERS.CUST = CUSTOMERS.CUST
AND CUSTOMERS.CTYPE = CTYPE.CTYPE
AND ORDERS.ORDSTATUS <> -6
AND ORDERS.CURDATE BETWEEN :$.FDT AND :$.TDT
GROUP BY 1, 2
HAVING SUM (ORDERS.DISPRICE) >= :$.NUM;
GOTO 1 WHERE :LINKED = :UNLINKED;
/* The user requested specific values of CTYPE, so remove from STACK4
   any tuples with intdata (i.e. ctype) = 0 */
DELETE FROM STACK4 WHERE INTDATA = 0;
LABEL 1;

In other words: collect and store in STACK4 all the data as one would normally do. If the linked CTYPE table contains all the values, then all the suitable customers are selected: nothing new here. If only certain values are chosen in CTYPE, then all the customers with those values are selected, along with the customers with no value (i.e. CUSTOMERS.CTYPE = 0).

After the insertion statement has finished, there is a check to see whether only specific values of CTYPE were chosen, and if so, then all tuples with the value 0 are deleted from the stack.
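The cursor-based solution from the original post can use the same :LINKED/:UNLINKED counting trick inside the query itself. The following is a rough sketch from memory, not the original code - the cursor and field names are illustrative:

```sql
/* Illustrative sketch of the cursor variant.
   Customers with CTYPE = 0 are kept only when the user chose all values
   (i.e. the linked CTYPE table is a full copy of the original). */
DECLARE CUSTS CURSOR FOR
SELECT CUSTOMERS.CUST, CUSTOMERS.CTYPE
FROM CUSTOMERS, CTYPE
WHERE CUSTOMERS.CTYPE = CTYPE.CTYPE
AND (CUSTOMERS.CTYPE <> 0 OR :LINKED = :UNLINKED);
```

The advantage of the insert/select version shown above is that the filtering condition stays out of the query entirely: the deletion happens once, after all the tuples have been collected.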

Friday, 29 May 2020

Calling a procedure from a screen trigger

I am starting work as a sub-contractor (i.e. someone else is the consultant who interfaces with the client and defines their work procedures whereas I'm the programmer who implements those procedures in Priority) for a company that wants to have one master company in which are defined the customers, suppliers, parts, etc. and satellite companies in which the actual transactions take place. My primary task is to write interfaces that will automatically transfer data from the master company to the satellites. 

For some tables (e.g. supplier type), new/updated data will be copied to all the satellites, whereas for other tables (e.g. customers), the user can decide to which satellites the new/updated customer will be copied. Upon hearing this, I realised that I had to define a table with four fields: the id of the new datum, the type of the new datum, the satellite name, and whether the data should be copied to this satellite. I also developed a form based on this table; this form will be a 'son' form of all the forms that allow a choice of satellite. This form is easy to define as it takes its inspiration from another general form, EXTFILES. The fun starts in the copying procedure.

Within Priority, there exists a documented method for sending data from one procedure or trigger to another procedure or report. I use this frequently when building procedures that send reports by mail via the scheduler. The procedure uses a local copy of one of the 'stack' tables (normally STACK, which has precisely one field - ELEMENT - which is of course the table's primary key). For example,
:GROUP = 'NO_CNC';
EXECUTE WINACTIV '-R', 'TEST_NOPROD_METAL', 'STACK', :$.STK, '-g', :GROUP;
My original thought was to do something similar in the form's post-form trigger: pass the parent form's id number to a procedure that then iterates through the chosen satellites, invoking an interface to do the actual addition to the satellite. As each procedure/interface is specific to a given form (i.e. the code necessary for updating customers is similar to, but different from, the code necessary for updating suppliers), there is no need to pass the data's type (customer, etc.) to the procedure. As only one datum is required, I could use the STACK table as shown above.

The communal form's POST-FORM trigger had code like this:
GOTO 2 WHERE :TEST_CHANGED = 0;
SELECT SQL.TMPFILE INTO :FILE FROM DUMMY;
LINK STACK TO :FILE;
INSERT INTO STACK (ELEMENT) VALUES (:$$.NSCUST);
GOTO 2 WHERE :RETVAL <= 0;
GOTO 1 WHERE :$$.EXTTYPE <> 'p'; /* not a part */
EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYPART', 'STACK', :FILE;
GOTO 2;
LABEL 1;
GOTO 1 WHERE :$$.EXTTYPE <> 'C'; /* not a customer */
EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYCUST', 'STACK', :FILE;
GOTO 2;
LABEL 1;
...
LABEL 2;
UNLINK AND REMOVE STACK;
This seemed reasonable but it didn't work! I spent a very frustrating hour discovering that the procedure was not receiving data. It transpires that every procedure has to have its data passed in a linked table of the correct type: the 'copy part' procedure has to receive a linked table of parts and the 'copy customer' procedure has to receive a linked table of customers, etc. So the final code became
GOTO 2 WHERE :TEST_CHANGED = 0;
SELECT SQL.TMPFILE INTO :FILE FROM DUMMY;
GOTO 1 WHERE :$$.EXTTYPE <> 'p'; /* not a part */
LINK PART TO :FILE;
INSERT INTO PART SELECT * FROM PART ORIG WHERE ORIG.PART = :$$.NSCUST;
EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYPART', 'PART', :FILE;
UNLINK AND REMOVE PART;
LABEL 1;
GOTO 1 WHERE :$$.EXTTYPE <> 'C'; /* not a customer */
LINK CUSTOMERS TO :FILE;
INSERT INTO CUSTOMERS SELECT * FROM CUSTOMERS ORIG WHERE ORIG.CUST = :$$.NSCUST;
EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYCUSTOMER', 'CUSTOMERS', :FILE;
UNLINK AND REMOVE CUSTOMERS;
LABEL 1;
...
LABEL 2;
Using the BACKGROUND parameter after EXECUTE means that the actual copying occurs in the background and so allows the user to continue working without interruption.
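For comparison, if I recall the SDK correctly, dropping the BACKGROUND keyword runs the activated procedure synchronously, so the trigger (and the user) would wait for the copy to finish - something to avoid when copying to several satellites:

```sql
/* Synchronous variant (assumed syntax): the trigger blocks
   until TEST_COPYPART finishes working on the linked PART table. */
EXECUTE ACTIVATE '-P', 'TEST_COPYPART', 'PART', :FILE;
```

The background form shown above is clearly preferable here, since the user gains nothing by waiting for the satellite update to complete.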

One final problem to look out for: users must be able to execute the copying procedures, which means that the procedures have to appear in a menu to which regular users have access.