Friday 17 December 2021

Extending the Priority Cross-referencer

The other day I added some new code to a procedure that I had written several months ago. The purpose of the code was to sum the quantity of a raw material found in the kititems of the work orders belonging to a given order line. The actual query for a given line worked correctly when I checked it in WINDBI, but inside the procedure it didn't work properly. The code itself was within a cursor loop and it was clear that the loop did not complete.

Eventually I realised that I had left out a few of the commands necessary for a cursor, primarily the check of :RETVAL after fetching a new tuple (to see whether there are any more tuples). The procedure worked correctly once I added the missing commands.

Just to be clear, the complete order of commands should be as follows (a minimal skeleton is sketched after the list):

  1. Declare
  2. Open
  3. Retval 
  4. Fetch
  5. Retval
  6. Loop
  7. Close
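
For example, a minimal skeleton of that order might look like this (a sketch only; the cursor name, the labels and the ORDERITEMS query are illustrative):

DECLARE MYCUR CURSOR FOR SELECT ORDERITEMS.PART FROM ORDERITEMS WHERE ORDERITEMS.ORD = :ORD; /* 1. Declare */
OPEN MYCUR;                    /* 2. Open */
GOTO 200 WHERE :RETVAL <= 0;   /* 3. Retval - did the cursor open? */
LABEL 100;
FETCH MYCUR INTO :PART;        /* 4. Fetch */
GOTO 200 WHERE :RETVAL <= 0;   /* 5. Retval - are there any more tuples? */
/* ... do something with :PART ... */
LOOP 100;                      /* 6. Loop */
LABEL 200;
CLOSE MYCUR;                   /* 7. Close */
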
Once I had the procedure working, I decided to add more checks to PrioXRef. 'Retval' and 'Loop' differ from the other cursor commands in that the others are always followed by the cursor name, whereas 'Retval' and 'Loop' aren't. Just to complicate matters, 'Retval' is a variable (it has the colon prefix).

After some thinking, I figured out that I needed to store cursor names in a stack (specifically, after 'declare'), so that 'Retval' and 'Loop' can reference the current cursor name; 'Close' 'pops' the current cursor off the stack. This part eventually worked well, but the analysis of the nodes in the parse tree for a cursor became more complicated, primarily because 'Retval' has to appear twice in specific places. After working on this for a while, I wrote new code that checks that all seven commands appear in the correct order. This code can't check in a strictly linear fashion: suppose that the first 'retval' is missing; the third node in the cursor tree will then be 'fetch', which does not match the third expected identifier ('retval') but does match the fourth.

I've only checked the new version against a simple toy procedure, but next week I'll check it against real code with multiple cursors, some of which are cursors within cursors (this is the reason for the stack).

Edit from 18/12/21: It seems that I was premature in stating that LOOP had been handled. I woke up this morning, realising that there are cases of LOOP that I had not handled: in about 20% of the procedures that I write, there are multiple LOOP statements for the same cursor. Very occasionally there is a procedure that has a LOOP with no cursor - this generally happens when iterating over a series of dates. Both of these cases would break what I wrote yesterday.

Edit from 03/01/22: Unfortunately I have to rewrite the code for RETVAL. Quite often I write code with a cursor (or even two) embedded within a cursor. Whilst the first cursor would have 'goto 500 where :retval <= 0' after the 'open' command, the internal cursor would have 'loop 100 where :retval <= 0' if there is only one internal cursor. There wouldn't be a problem if there were two internal cursors, as presumably the second has to execute even if the first fails (and so after 'open' would be written 'goto <label> where :retval <= 0'). It looks like I am going to have to write some kind of state machine with much more detail than I have done so far (e.g. the only cursor command that can come after 'declare' is 'open'; anything else is a mistake).

Friday 12 November 2021

Executing Priority Commands from an External Application

This is another topic that is documented in the SDK (in chapter 15, advanced programming tools), but as always, what is written is somewhat ambiguous and a few more examples would not have hurt. After having spent no small amount of time figuring out what works, I want to record my experiences here.

But first of all: why do I need to execute a Priority command from an external application? Lately we've been working on raising the level of computerisation on the factory floor by means of barcode readers. My original concept was that there would be a computer running Priority with a special, private and bespoke, screen being displayed, into which barcodes are scanned. The screen has a post-insert trigger for the lines that does something useful and clever.

The next step came from someone asking why this computer had to be running Priority. There are both advantages and disadvantages to this: maybe the disadvantages could be overcome without losing the advantages? So I suggested writing a program in Delphi that accepts the barcodes, saves them to a file and then transfers that file to the server, where it is read by a procedure that is activated every half an hour.

An even better solution would be for the computer to run a Delphi program that accepts a barcode and transmits it immediately to Priority, which could possibly handle that barcode at the same time. I had an arrangement like this twenty years ago! It's called Remote Execution, or Rexec.

The section that I needed was in a part of the SDK that I had managed to miss: the calling computer runs a program called WINRUN on the server, which runs WINACTIV, which runs a procedure (this is only one of the possibilities documented). As I wrote at the beginning, the documentation is ambiguous and I want to explain it fully here.

First off: TABULA.INI. I found the copy of this file that is stored on the SQL server (i.e. the copy that is used when Priority is run on the server) - it's stored in C:\WINDOWS on the server and so is not normally visible. On our server there is a directory D:\pr_SQL that has subdirectories BIN.95, system, etc - i.e. Priority; each installation chooses its own directory for this. This directory is mapped as X: on client computers; on the server I copied c:\windows\tabula.ini to x:\1\tabula.ini, making this file visible to all. On the client computer, where my Delphi program will be run, I defined an environment variable thus: set TABULAINI=x:\1\tabula.ini. This only has to be done once on the computer.

I then wrote a procedure in Priority, TEST_UPDCONST. This procedure simply updates a value in a private table whenever it is run - this way I could see whether what I was doing was working. Once I had the command debugged, I then investigated how I could pass parameters to the procedure (this obviates the need to transfer files or similar).
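
Such a test procedure can be as small as a single statement; a sketch (the private table TEST_CONST and its fields are invented here purely for illustration):

/* bump a counter in a hypothetical one-row private table, leaving a visible trace that WINRUN really ran the procedure */
UPDATE TEST_CONST SET VALUE = VALUE + 1 WHERE NAME = 'WINRUNTEST';
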

Here is the final command that must be sent to Priority, along with comments.

Command part | Description | Constant
X:\BIN.95\WINRUN | WINRUN is the program that is run on the server; it is situated at X:\BIN.95 on my server. The location should be BIN.95 on the drive where Priority is located. | Y
"" | Two double quotation marks. Don't ask me why. | Y
tabula 123456 | The account name and password. It makes sense to have tabula run the procedure, thus preventing any problem with permissions. Of course, the password is not 123456. |
x:\system\prep | This directory is always used when preparing procedures. | Y
demo | The name of the company (environment) in which the command will be run. |
X:\BIN.95\WINACTIV.EXE | This is the name of the program that runs the procedure. It didn't work without prefacing the name with its location. | Y
-P | A procedure is being run. | Y
TEST_INSERTNESTBC | This is the name of the procedure. |
-var:BC "A09" | First variable that is being passed to the procedure, whose name is BC and whose value is "A09". |
-var:TS "12/11/21 09:04" | Second variable that is being passed to the procedure, whose name is TS and whose value is "12/11/21 09:04" (i.e. the current date and time). |

Put together, the complete command is X:\BIN.95\WINRUN "" tabula 123456 x:\system\prep demo X:\BIN.95\WINACTIV.EXE -P TEST_INSERTNESTBC -var:BC "A09" -var:TS "12/11/21 09:04".

In the above table, there is a column 'constant'; if the value is Y, then the command part should be written as is (obviously substituting the correct directory on the server). The SDK does not mention that the parameters of the command are case sensitive: if one is passing parameters to the procedure being called, then one must write "-var" and not "-VAR" as I originally did. One optional parameter is "err <filename>": this causes error messages from the called procedure to be written in this file; again, it's "err" in lower case, not "ERR".

The SDK also documents how the called procedure should accept its parameters; I thought that this was ambiguous but it turns out to be accurate. In the case shown above, the called procedure (TEST_INSERTNESTBC) has two char parameters, BC and TS. Inside the procedure, they are referred to as :EXTERNAL.BC and :EXTERNAL.TS, not :$.BC and :$.TS. My misunderstanding was with reference to 'EXTERNAL': in chapter 2 of the SDK there is a list of predefined variables, such as :PAR1, :PAR2, :PAR3 and :KEYSTROKES. I've noted in this blog a few undocumented variables to add to this list. I did not notice that the last variable appearing in the list is :EXTERNAL.VARNAME, "used in procedures to refer to variables inputted [sic] as part of the WINRUN command". This is similar to the half-documented variable :PROGPAR.VARNAME that I discussed here.
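
As a sketch of the receiving side (the local variable names are mine), the start of TEST_INSERTNESTBC might simply copy the external values into local variables:

/* the values arrive from the -var flags of the WINRUN command */
:BC = :EXTERNAL.BC;   /* 'A09' in the example above */
:TS = :EXTERNAL.TS;   /* '12/11/21 09:04' */
/* ... now use :BC and :TS to do whatever the procedure needs ... */
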

So now I have a very powerful tool at my disposal. First it will be used in a specific setting (that received its inspiration from another Delphi program that I developed this week that avoided having to access Priority), but I'm sure that in the coming months, this technique will be used more frequently.

It should be noted that this technique is s-l-o-w: it takes about five minutes for the entire process.

Thursday 7 October 2021

Conditional display of sub-reports within an HTML document

Scenario: let's assume that for some customers we want to print customer orders using a certain sub-report whereas for most customers we use the default sub-report. How to do this?

The first thing that needs to be done is to add a field to the customers table/form that shows which customers are 'special'. For the purpose of this blog, I'll assume that there is a simple binary choice between 'special' and 'regular', in which case maybe three or four customers are marked as 'special'. In my real world use of this technique, the distinguishing field is in the 'activities' table and there are about nine different values, so obviously an integer field is needed and not a boolean. I'll call this distinguishing field TEST_CASE.

In the HTML procedure, within the first SQLI statement after the HTMLCURSOR stage there will be a line similar to

SELECT ORD INTO :$.ORD FROM ORDERS WHERE ORD > 0;
In real life, that statement is much longer, but irrelevant for didactic purposes. Adding our distinguishing variable to the above statement changes it to
SELECT ORDERS.ORD, CUSTOMERS.TEST_CASE INTO :$.ORD, :$.CAS FROM ORDERS, CUSTOMERS WHERE ORDERS.CUST = CUSTOMERS.CUST AND ORDERS.ORD > 0;
where CAS is defined as a single character variable.

Now for the sub-reports. As this document will be based on the existing procedure WWWSHOWORDER, it's best to copy the procedure to TEST_WWWSHOWORDER and copy the sub-report WWWORD_2R to TEST_WWWORD_2R (assuming that this is the sub-report that we want to display conditionally). Running TEST_WWWSHOWORDER at this stage should produce the same output as WWWSHOWORDER. 

The first change that we need to make is to add the parameter CAS to TEST_WWWORD_2R. Then we need to add to this report the table/field combination DUMMY.DUMMY; this field is of course set to be non-displayable. The field's expression is = 1 AND :CAS <> 'Y' (this is a good reason why we have to copy the standard sub-report: we have to make a change to it, and had we changed the original, any other procedure that includes it would fail for the special customers). If we now print orders for a 'normal' customer and a 'special' customer, the print-out for the 'normal' customer will include the order lines, whereas they will be missing from the 'special' customer's print-out, because for that customer :CAS will be equal to Y.

The next step is to copy TEST_WWWORD_2R to a new report, let's say TEST_WWWORD_2RA. Within this report, we make whatever changes are necessary in order to display the data in the format required for the special customers. After doing so, the DUMMY condition is changed to  = 1 AND :CAS = 'Y'.

TEST_WWWORD_2RA is now added to the TEST_WWWSHOWORDER procedure; it should have three parameters (as does TEST_WWWORD_2R) where the first two are ORD and CAS. We need to make a very important change to the 'output' variable (probably O2): for our special conditional sub-report we need to define a new 'output' variable. Of course, this variable has to be added to the final INPUT stage and the 'create HTML page' activation has to be run. This is the key step to defining a conditional report. Of course, TEST_WWWORD_2RA has to be added to the list of sub-reports for a given format.

What happens now when we run the procedure for an order belonging to a 'normal' customer? CAS will not be equal to Y; TEST_WWWORD_2R will produce output but TEST_WWWORD_2RA will not (the DUMMY condition prevents any data being displayed). If the procedure is run for an order belonging to a 'special' customer? CAS will now be equal to Y; TEST_WWWORD_2R will not produce any output but TEST_WWWORD_2RA will.

It is vitally important that the two equivalent sub-reports have different 'output' variables. If they shared the same variable they would overlay each other (that is what sharing an 'output' variable means); since one sub-report produces data for a given order and the other doesn't, no output would appear for the lines. With different 'output' variables the sub-reports do not overlay each other: one produces no output, the other does, and something is guaranteed to appear.

Thursday 9 September 2021

Another undocumented variable in Priority - :ERRMSG

I was faced today with the problem of trapping a certain condition when a line is entered into a form by means of an interface. I want to check that a given condition does not occur on any line of a certain type; if it does, then I want to output an error message and stop the procedure. My initial code was as follows:

ERRMSG 353
FROM TEST_LOADSIFORDERS
WHERE USER = SQL.USER AND RECORDTYPE = '2' AND U1 = ''
AND PARTNAME = '000' AND STRIND (TEC_PN, 1, 8) = 'ASSEMBLY';

The only important thing here is the first line: if the clause is true, then error message 353 will be displayed. The error message itself says that a line has something missing. Unfortunately, this error message doesn't say on which line the error occurred (which wouldn't be useful, either) or which part is incompletely defined. It would be good if some identifier (TEC_PN) could be shown in the error message, but this concise syntax doesn't allow this. Another possibility is a loop, but that's something that could/should be avoided.

Then I remembered  that I once wrote about two undocumented variables that can be useful in designing screen triggers, :GOTO and :WRNMSG. These two variables were used in a similar context: if some condition occurred, then the appropriate warning message would be displayed and execution would transfer to the label stored in :GOTO. I wondered whether there is an undocumented variable, :ERRMSG ... and it turns out that there is. So my code is now

SELECT 353, TEC_PN INTO :ERRMSG, :PAR1
FROM TEC_LOADSIFORDERS
WHERE USER = SQL.USER AND RECORDTYPE = '2' AND U1 = ''
AND PARTNAME = '000' AND STRIND (TEC_PN, 1, 8) = 'ASSEMBLY';

with the error message containing both unchanging text and the value passed in :PAR1, thus killing two birds with one stone. The disadvantage of this code, as opposed to a loop, is that it ignores the possibility that two lines might have the error. I don't have any statistics at hand but I imagine that this error occurs about once every two hundred lines, which is why I don't want to slow down the 199 lines that are correct.

Thursday 26 August 2021

An undocumented function in Priority - ATOR

There have been times, especially when I am programming interfaces that read external files and turn them into Priority data, that I wish that there was an Ascii2Real function, presumably ATOR, in the same way that there is an Ascii2Integer function (ATOI) and an Integer2Ascii function (ITOA). We have RTOA but no ATOR.

I've overcome this in the past by writing a subroutine that receives a string holding a real number (:QUANT) and returns a real number (:RQUANT).

SUB 879;
:RQUANT = :TT = 0E9;
SELECT STRPIECE (:QUANT, '.', 1, 1) INTO :TMP FROM DUMMY;
:RQUANT = ATOI (:TMP) + 0.00000;
SELECT STRPIECE (:QUANT, '.', 2, 1) INTO :TMP FROM DUMMY;
GOTO 102 WHERE :TMP = '';
/* This is the decimal part of the number. Figure out what the divisor has to be by the length of the number: e.g. num=3, div=1; num=30104, div = 5 */
:TT = 0.00000 + ATOI (:TMP);
:DIV = 1; :I = 0; :LEN = STRLEN (:TMP);
LABEL 101;
:DIV = :DIV * 10; :I = :I + 1;
LOOP 101 WHERE :I < :LEN;
:RQUANT = :RQUANT + (:TT / :DIV);
LABEL 102;
RETURN;
Whilst this code works, I would prefer a predefined function in Priority. Today I was looking at a very old program that I did not write and saw the use of the ATOR function! As I know that this old program has correct syntax - and is used at least ten times a day - I felt fairly safe in replacing my subroutine with a simple function call, :RQUANT = ATOR (:QUANT). I also checked the syntax in WINDBI; lo and behold, this function turns a string like '3.1234' into a real number with six digits of accuracy, 3.123400.
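
For a quick sanity check, one can run something like the following in WINDBI (a sketch; DUMMY is the usual one-row utility table, and I'm assuming FORMAT behaves here as it does for other ad hoc queries):

SELECT ATOR ('3.1234') FROM DUMMY FORMAT;
/* expected output: 3.123400 */
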

Why would this useful function be undocumented, especially as it has existed in Priority since 2006? I won't even try to guess.

Wednesday 25 August 2021

Creating Excel files from Priority

A question that I see frequently asks how one can create an Excel file from Priority. There are two different answers to this question - one solution creates CSV files that can be read into Excel with no problem, and the other solution creates XLSM files that are native Excel files. The 'M' in this extension stands for Macro; one can include in these files a macro that executes on the data. Unfortunately, such files are often caught by email anti-virus filters, so sending them to third parties can be problematic.

For the purpose of this blog entry, I assume that there exists a report TEST_OUTPUT2EXCEL. It doesn't matter what the contents of this report are.

CSV method: this is very simple and creates, as expected, a CSV file. This means that the file does not allow any formatting of the contents.

:XLFILE = 'C:/TMP/TEST_OUTPUT2EXCEL.CSV';
EXECUTE WINACTIVF '-x', :XLFILE, '-R', 'TEST_OUTPUT2EXCEL';
:EMAIL = 'someone@somecompany.com';
MAILMSG 1 TO EMAIL :EMAIL DATA :XLFILE;
The first line defines a file name and the second line executes the TEST_OUTPUT2EXCEL report and sends the output to the denoted file. As far as I can figure out, the first parameter to WINACTIVF, i.e. '-x', means create an Excel/CSV file; the second parameter is the name of the output file; the third parameter is the report type: R = simple report, P = procedure, and the fourth and final parameter is the name of the report/procedure to be executed. The third line in the above snippet defines an email address and the fourth line shows a simple way to send a file to someone via email. The report itself does not have to be defined in any menu but the person executing the above code has to have permission for the report. This means that the report does not have to appear on a menu if it is executed solely by the task scheduler, but execution by 'normal' users requires the report to appear on a menu.

XLSM method: this is more complicated, naturally. There is a preliminary stage for this method that requires that the report appear on a menu: one has to create an Excel template for the report. One performs a right click on the menu option and chooses 'Create Excel template', then one runs the report. All the required formatting is done on the Excel file that is created, then one saves the file without changing its name, extension or location (the file name will be the name of the report and the extension XLT; the location will be ..\..\system\document). Once the template has been prepared, the report can be removed from the menu if it is not intended to be run manually. The template can be edited at a later stage, but again this requires that the report appear on a menu.

The next stage is to find the template's number. There is no simple way of doing this; I opened WINDBI then typed 'SELECT * FROM EXCELTEMPLATES WHERE UDATE > SQL.DATE8 FORMAT'. This query will list all the templates that have been created today - there shouldn't be very many! The value of the EXEC field will be the EXEC number of the report. For the purposes of this blog, the template number will be 207, but I warn that this number has to be found empirically.

Once all the above has been done, the execution is simple:
:XLFILE = 'C:/TMP/TEST_OUTPUT2EXCEL';
EXECUTE WINACTIV '-R', 'TEST_OUTPUT2EXCEL', '-X', :XLFILE, 207;
:EMAIL = 'someone@somecompany.com';
MAILMSG 1 TO EMAIL :EMAIL DATA STRCAT (:XLFILE, '.XLSM');
Here the order of the parameters seems more logical: type of report, name of report, Excel, name of output file (note that one does NOT give an extension) and number of template. This creates the file and adds the extension name; the extension has to be added explicitly (hence the STRCAT) when sending the file by email, although as I noted at the beginning, such a file is likely to be stopped by an email anti-virus filter. This file will be formatted according to the format of the template file; one can also define a macro within the template file that is supposed to execute automatically when the file is opened (according to this page; I haven't tried this myself and there are likely to be problems executing macros on network files).

Update regarding how to find the template number: the template file will be in directory ..\..\system\document and its name will be something like TEST_OUTPUT2EXCEL_207.xlt, i.e. the name of the procedure or report for which the template was defined, followed by the number of the template. One can take the number directly from here instead of issuing a query via WINDBI.

Thursday 19 August 2021

Secret tip for tabular reports

In a regular report, the number of columns is fixed and the number of rows depends on the data that the report displays. In a tabular report, neither the number of columns nor the number of rows is fixed. In Excel, a tabular report is called a pivot table. The most common use of tabular reports is for displaying some datum over a year, where each column represents a month - it is easier to define what will appear in a column and have the tabular report framework do the hard work of displaying the correct data in each column than to define 12 columns and manually calculate the data per month. But a tabular report is not limited to displaying monthly data: it can be used for displaying data per 'something' when one is not sure how many 'somethings' there will be - e.g. displaying sales per order type or branch.

One of my clients complained that a report that had displayed data from previous years was not working for the current year (this is not a report that I wrote). When I began investigating, I discovered that the report was tabular and that it was displaying data per department: the number of departments depended on the time period of the report (some departments only had data in one month). Some months would cause the report to display successfully, but one month in particular would cause the report to fail, displaying the error message "Too many X values". 

To me, this means that the report is trying to display too many columns. I know that there is a parameter used when defining tabular reports, "maximum table width" (I checked all the tabular reports defined in my system: some have the value 300 and some have 0), but it does not seem to affect anything. My first suggestion was to shorten the department names: these are displayed in each column and maybe they were taking too much space. Changing the length of the names made no difference: the error message really means that there are too many columns to be displayed. As this is a problem with the data and not with the report, there wasn't much that I could do to help the client.

Showing initiative, the client eventually found a help page from Priority Software (the company) that deals with the error message; the solution is to change the value of the system constant TABLEMAXX (or TableMaxX, to make it clearer). This information was passed on to me for implementation. I first checked the 'system constants' form: no constant by this name appeared. The client is using a relatively old - but good - version of Priority and maybe the constant had been made visible in later versions. This left me no option but to add the above constant to the system constants table - something that is not possible via the system constants form. Thus I had to do this via the command interface, aka WINDBI.

INSERT INTO SYSCONST (NAME, VALUE) VALUES ('TABLEMAXX', 300);
INSERT INTO SYSCONSTA (NAME, ORIGVALUE) VALUES ('TABLEMAXX', 300);

The second line isn't required to change the value of the constant, but without it, the constant won't appear in the form.
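
To verify that the constant was indeed added, one can query the table in WINDBI; a sketch:

SELECT * FROM SYSCONST WHERE NAME = 'TABLEMAXX' FORMAT;
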

After adding the constant, I ran the report for the problematic month, held my breath and ... the report worked! Magic! I then ran the report for an entire year and the data appeared. 

I got the impression that 300 is the maximum value for this constant; I hope that the client doesn't add any more departments, because otherwise at some stage we are going to run into the same problem again.

And what is the unit of measurement for this constant: columns, characters or doodlebugs? At the moment, I don't care; all that matters is that the report now works and that the client is satisfied.

Edit from a few days later: The TableMaxX constant does not appear in the System Constants form in Priority 21, the current version, so even there it would have to be added manually. I suspect that the unit of measurement is columns.

Tuesday 17 August 2021

Defining a dynamic target form name for a form column: ZOOM1

This blog will describe a technique that is documented in the SDK but could be written more clearly, as always.

In previous installments, we have seen how to define a dynamic form target for a field in a report. For example, a report might display data from both purchase demands and purchase orders, where one field shows either the purchase demand number or the purchase order number. Clicking on this field will cause the appropriate form to be opened in order to see in detail the purchase demand or the purchase order.

The same mechanism exists in forms, but the implementation is different. I developed a few years ago a form that can display invoices of different types ('A', 'C' or 'F') and allows editing of certain fields in the invoice. Pressing F6 on the invoice number opens an intermediate form, not the specific form for the specific type (AINVOICES, CINVOICES or FINVOICES). This annoyed me sufficiently to seek the solution, which is ...

One defines a form column named ZOOM1, whose value should be the 'exec' number of the required form (not the name of the target form). In the case of the invoices, the exec number is in IVTYPES.EXEC, and so the column name will be EXEC and the table name IVTYPES. This can be seen in the screenshot on the left. This form column is used in the IVNUM field: in the form column extension sub-form, the target form name is set to ZOOM1.
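
In outline, and using the notation of the example further down (a sketch; the exact layout in the form generator may differ), the two definitions amount to:

(form column) ZOOM1 (table.column) IVTYPES.EXEC
(form column) IVNUM (Target Form Name, in the Form Column Extension sub-form) ZOOM1
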

ZOOM1 is predefined, so one does not have to define it anywhere. Using 'ZOOM' without a numerical suffix does not work! Apparently ZOOM2-ZOOM9 are also defined, should one need more than one dynamic target form in a given form.

I have another private form that displays data about all the programs that I have developed: their name, date of creation, notes and most importantly, for whom it was developed. This form can display reports, procedures, forms and interfaces, so a dynamic target form name is required should one wish to open the chosen program (and frequently I do). In this case, however, I don't want the EXEC number of the program itself, but rather the EXEC number of the form that displays the program: if the program is a report then the form EREP should be opened, and if the program is a procedure then the form EPROG should be opened. As the table EXEC is already being used in this form, I will need to alias the table and use this in what Pascal would call a 'case' statement:

(form column) EXEC1.ENAME (expression) = (EXEC.TYPE = 'R' ? 'EREP' : (EXEC.TYPE = 'P' ? 'EPROG' : (EXEC.TYPE = 'F' ? 'EFORM' : (EXEC.TYPE = 'I' ? 'EINTER' : 'NULL'))))

Another form column will be required: its name will be ZOOM1 and its value EXEC1.EXEC. I later discovered that the condition EXEC1.TYPE = 'F' should be added in order to prevent duplicate rows in the form (apparently there are two entries in EXEC with the name EREP: one is a form and the other is a menu).

Tuesday 3 August 2021

Strange report bug

My company has a private table called TEST_BATCHES in which are stored 'batches', groups of work orders that are processed together in our new nesting machine. Generally one batch contains all the work orders required to manufacture 10 items of a given order line. We've recently discovered that the nesting machine produces a file that lists all the batches that it has processed in the past day and I have put this to good use, writing a procedure that extracts the relevant data from this file (not particularly easy) and reporting all the work orders as completed.

It happens that the file might have eight batches but only five are reported by this procedure, because three have already been reported manually. I wanted to add a little code to the procedure that would send me an email listing all the batches that are in the nesting file but not reported. I won't show how this is done because it is not particularly relevant; the unreported (or previously reported) batches are stored in a linked copy of STACK2 and this table is passed to the report that creates the email.

Every day I have been receiving an email listing all the batches that have ever been created (not that many!) which is clearly wrong. I checked the procedure, and STACK2 holds at most a few records at the end. I emptied the real STACK2 table in case this was causing problems; it wasn't. I checked the very simple report to see that there were no mistakes there - like marking TEST_BATCHES.BATCH with a question mark, so that all records would be shown in the report. No.

Today I had the idea of dumping the report in WINDBI. I was expecting to see the following

SELECT TEST_BATCHES.BATCHNAME FROM TEST_BATCHES, STACK2 WHERE TEST_BATCHES.BATCH = STACK2.ELEMENT AND TEST_BATCHES.BATCH > 0 AND 1 = 1;
What I saw, though, was something different that explained why the report contained all the records in TEST_BATCHES:
SELECT TEST_BATCHES.BATCHNAME FROM TEST_BATCHES WHERE TEST_BATCHES.BATCH > 0 AND 1 = 1;
Where had STACK2 gone to? After contemplating this for a few moments, I decided to add STACK2.TYPE to the report; lo and behold, dumping the report now showed the correct SQL query. Marking STACK2.TYPE as hidden removed STACK2 from the join statement again. As this field is always going to be zero (the procedure doesn't use it; the procedure already uses STACK for something else, otherwise I would have used that simpler table), I compromised by marking the "don't display if zero" field.

Tomorrow I expect to see only a few lines in the emailed report.

Considering this example, it's very much an edge case. I'm trying to think of other reports that might use only the key(s) of a table; most of my reports use STACK4 where both the key and data fields are displayed in the report. Otherwise there's no point in using STACK4 - I could just as easily use STACK. There might be a case of using only KEY1 and KEY2 of STACK8; I'll try and find such an example.

I did find an example of using only the keys of Stack8 and the above bug did not appear. I then created a new procedure with a report that initially stored all the orders opened today in STACK.ELEMENT and then in STACK2.ELEMENT; the bug did not appear.
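
The sort of test harness I mean looks roughly like this (a sketch; the linked-table parameter :$.STK and the WHERE clause are illustrative):

LINK STACK2 TO :$.STK; ERRMSG 1 WHERE :RETVAL <= 0;
INSERT INTO STACK2 (ELEMENT) SELECT ORD FROM ORDERS WHERE CURDATE = SQL.DATE8;
UNLINK STACK2;
/* :$.STK is then passed to a report that joins ORDERS (or TEST_BATCHES) to STACK2.ELEMENT */
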

Moral of this story? Use the 'dump' function more often.

Wednesday 28 July 2021

Version control for Priority procedures (2)

I've been working a certain amount recently with load interfaces, causing me to bemoan the fact that there's no easy way of documenting them. As opposed to procedure steps (which are stored in the tables PROGRAMS and PROGRAMSTEXT), these are stored in the tables LOAD and LOADTEXT. Despite this, it was fairly easy to extend the SAVEPROGVERSION procedure that I wrote about a few months ago to include load interface code.

There are two new snippets that have to be inserted into the procedure's text; the first is at the beginning

:LOAD = 0; :TYPE = '\0';
/* 28/07/21: Get interface data */
SELECT EXEC.EXEC, EXEC.TYPE INTO :LOAD, :TYPE FROM EXEC, PROGRAMS WHERE PROGRAMS.EXECRUN = EXEC.EXEC AND PROGRAMS.PROG = :$.PRG;
whereas the second part is towards the end; the new lines are the test on :TYPE, the GOTO/LABEL logic, and the INSERT that reads from LOADTEXT:
GOTO 1 WHERE :TYPE = 'L'; /* Interface */
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT) SELECT 3 + SQL.LINE, '4', TEXT FROM PROGRAMSTEXT WHERE PROG = :$.PRG;
GOTO 2;
LABEL 1;
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT) SELECT 3 + SQL.LINE, '4', TEXT FROM LOADTEXT WHERE LOAD = :LOAD;
LABEL 2;
EXECUTE INTERFACE 'TEST_PROGVERSION', SQL.TMPFILE, '-L', :$.GEN;

Monday 19 July 2021

Choose-field trigger for procedural parameters

One of my clients wanted a report that would compare the price of the same part at different dates. The comparison part is simple (although the report should include dynamic column titles, an interesting technique) but choosing the price lists is complicated: a supplier can have several price lists that are active concurrently, and a price list does not have an easily choosable primary key.

The ideal user interface would be supplier number and two dates, where the dates come from the supplier's price lists. This requires what could be termed a conditional choose-field trigger, as the trigger's code is dependent on the supplier's number. This is, to be blunt, very difficult to achieve in Priority.

I wrote on this topic a few years ago and quoted text from the SDK that purports to explain. I'll quote the text again here so that I might comment upon it: You can also write a specific CHOOSE-FIELD or SEARCH-FIELD for the procedure. Your trigger can contain references to any input value specified by the user within the same [emphasis mine] procedure step. For instance, if the procedure step contains an input parameter called CST, its value will be stored in the :PROGPAR.CST variable. This is useful, for example, if a given procedure step contains an input column for a Sales Rep and another input column for a Customer, and you want the Choose list for the latter column to display only those customers that are associated with the specified sales rep.

It would have helped greatly had a detailed example of this usage been presented, or at least the name of a standard procedure that uses this technique. I tried to write a trigger using this technique; I assumed that PROGPAR was the name of the procedure, so I tried to do something similar with TEST_COMPAREPRICELIST.SUP, but I received an error message.

In the end, I wrote a slightly complicated report that displays the text of a given trigger for parameters in procedures; using this report, I was able to see that standard report INCOME_BUDGETS uses this technique in step 4 for the parameter VR -
SELECT VERDES,VERCODE,VALID FROM BUDVERSIONS WHERE GL = (SELECT GL FROM GENLEDGERS WHERE GLNAME = :PROGPAR.GLF AND GL <> 0) ORDER BY 3 DESC,2;
From this code, it can be seen that :PROGPAR is the name of a variable and not the name of a procedure; GLF is a parameter that is connected to a table. But this is not as simple as it seems: GLF receives a value that is calculated in a previous stage. Is this necessary?

After banging my head against a brick wall for a while, I finally found the correct combination of parameters that will allow me to choose a supplier then receive a list of dates for the supplier's price lists. The procedure uses a regular 'line' parameter, SUP, that is defined to receive a supplier's number. The important fact to remember is that in the choose-field trigger, :PROGPAR.SUP is the value that is entered in the SUP parameter (in this case, the supplier's external number), not the internal number of the supplier and not the name of the table that is linked to the parameter.

The choose-field trigger for DT1 that actually works is as follows. Converting the dates to integers allows them to be sorted in reverse order (newest first); otherwise they get sorted in ASCII order which is incorrect.
SELECT DTOA (SUPPRICELIST.SUPPLDATE, 'DD/MM/YY'), DTOA (SUPPRICELIST.SUPPLDATE, 'DD/MM/YY'), ITOA (SUPPRICELIST.SUPPLDATE)
FROM SUPPRICELIST, SUPPLIERS
WHERE SUPPRICELIST.SUP = SUPPLIERS.SUP AND SUPPLIERS.SUPNAME = :PROGPAR.SUP
ORDER BY 3 DESC;

Wednesday 19 May 2021

Conditional opening of a form from a report

In what might be termed a 'normal' report, one can cause a specific form (screen) to be opened with the current field in the report by pressing F6; this is default behaviour, but can be overridden by defining a specific form in the Target Form Name column of the Report Column Extension sub-level of the Report Columns form (this is documented). Thus for a part, normally the form LOGPART will be opened, but this can be changed to open a different form such as FLUSHERROR.

In more complicated reports that display different types of document in the same field (such as an order number or a delivery note number), the above mechanism does not work and so one has to define a special field that contains the name of the form to be opened. I wrote about this a few months ago but cleverly managed not to document how the given field is marked; this used to be undocumented, but recent versions of the Priority SDK devote a section to this, although as usual this is somewhat unclear.

In order to achieve this, for the report column in question, record the following settings in the Link/Input tab of the Report Columns-HTML Design sub-level of the Report Columns form:

  • Link/Input Type = P
  • Return Value Name (:HTMLACTION) = _winform
  • Return Value Column# (:HTMLVALUE) = the number of the column containing the ENAME of the target form. Note: The column with the ENAME of the target form must have a Sort value.
  • Internal Link Column# = same as :HTMLVALUE above.

I recently had to add a new wrinkle to the above. The report in question sometimes displayed part numbers but could also display text labels (it is based on exploding a BOM and then adding values for variables such as 'shipping' at the end of the report); I wanted clicking on a part name to bring up the LOGPART form, but clicking on a text label to do nothing.

One can't use the standard specific form in the Target Form Name column as the given field is an expression. Using the _winform approach means that the 'column containing the ENAME' should be defined as = (STACK4.INTDATA = 0 ? 'LOGPART' : <something>) and a further hidden field be defined as EXEC.TYPE = 'F'. The '<something>' is a placeholder intended for the text labels; I wasn't too sure what to place there at first.

Initially I replaced <something> by ' ' (i.e. the null string), but then these extra lines did not appear as there is no entry in the EXEC table for ENAME = ' ' and TYPE = 'F'. I then replaced the empty string with 'F' (as I noted in a previous blog, there is a dummy form called F); this worked in the sense that the extra lines appeared, but clicking on the text label caused this dummy form to appear, which I did not want. My next attempt was to replace the condition on the EXEC.TYPE field with  = (STACK4.INTDATA = 0 ? 'F' : ''); this worked properly (i.e. no form opened when clicking on the text label) but seemed too fussy.

Glancing through the SDK one more time, I found a more elegant solution: EXEC.TYPE can be equal to 'F' and ENAME becomes = (STACK4.INTDATA = 0 ? 'LOGPART' : 'NULL'). This is documented  both in the 'Forms' chapter and the 'Reports' chapter of the SDK (to disable automatic access from a given column, specify the NULL form as the target form in the Form Column Extension form).

I learn something new every day.

Wednesday 14 April 2021

Sonraw

The external program SONRAW is invaluable: it takes a list of parts and builds an 'exploded' bill of materials for each part. Unfortunately, there are also a few very important 'gotchas' of which we have to be aware. These arise due to what might be considered a poor decision by the original system developers: the output of SONRAW is stored in the table PARTARC, which is the same table in which are stored the default BOM data.

A brief explanation 
Putting this another way, the 'normal' PARTARC table stores a parent part and its direct sons, i.e. to a depth of 1. Let's say that there is a parent part A that has sons B, C and D: there will be three rows in the PARTARC table where the parent is A. But B, C and D might have sons of their own (e.g. B1, B2, B3, C1, etc). In this case there will be three rows in PARTARC where the parent is B (the sons will be B1, B2 and B3). If it's not yet clear, a part can appear both in the 'part' field of PARTARC, when it is a parent, and in the 'son' field, when it is a son.

What does SONRAW do? If there are three rows, A/B, A/C and A/D, and another three rows B/B1, B/B2 and B/B3, the resulting linked PARTARC table will also contain six rows, but these will be A/B, A/B1, A/B2, A/B3, A/C and A/D although not necessarily in this order. There is a field in PARTARC that stores the depth of the son; for A/B, the depth will be 1, but for A/B1, the depth will be 2. This data enables one to write a relatively simple report that displays the recursive tree as a list.

How does one use SONRAW? 
One can use SONRAW as a separate procedure step (see procedure PARTTREE, stage 40) or one can call SONRAW within an SQLI step; it's slightly more than a matter of preference - I find it easier to call SONRAW within an SQLI step. Whichever way, there must be five parameters passed; these are 

  1. a table linked to PART - this will contain the parent parts
  2. a date - I think that this is useful when installations have different versions of BOMs. I don't have this requirement and so always pass today's date
  3. a table linked to PARTARC - this will contain the output
  4. an integer between 0 and 4 - this indicates which option to use (look at PARTTREE, stages 10-30). I normally use 0; 2 will include phantoms in the output (?)
  5. a flag that can either be empty or '-N'; the 'N' probably stands for 'normal'. Leaving the flag empty will cause the output to be partial; see PARTTREE stage 10 for 'an explanation'
My normal code for using this is as follows
LINK PART TO :$.PRT; ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC; ERRMSG 1 WHERE :RETVAL <= 0;
/* the parameter 2 means show phantoms as well as standard parts */
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 2, '-N';
What comes after this depends on the application. One can unlink the tables, then pass PARTARC to a report that prints out the tree. I often have a cursor that runs after the SONRAW code in order to extract only purchase parts (and writing this now, I realise that the fourth parameter to the call above is '2'; according to PARTTREE, this means that only R parts will be included in the output).

So where are the pitfalls?

There are several fields in the PARTARC table that appear (unsurprisingly) in the PARTARC screen (and also PARTARCONE) that SONRAW trashes. Prior to SONRAW, the field PART contains the internal part number of a given parent (e.g. B in the simple example above), SCRAP contains the percentage scrap of part B1 (the calculation of the scrap is not straightforward, but that's another story), SONREVNAME contains the internal number of the son revision chosen for this line, and SONQUANT contains the quantity of B1 needed for one unit of B. But after SONRAW, PART contains the internal part number of the top part (i.e. A), COEF contains the quantity of B1 needed for one unit of A, and SONQUANT contains the internal part number of the original parent (i.e. B). SCRAP and SONREVNAME contain values but they're not what they were prior to running SONRAW (and I don't know what they are).

PARTARC.SONACT also gets trashed but in this case, the new value after SONRAW is very useful: it holds a sequential index to the lines in PARTARC (i.e. 1, 2, 3). This gets used in any report based on PARTARC as it presents the lines in the correct, hierarchical, order. But this index has added value: it can be used as an index into an auxiliary table (e.g. STACK4) that holds additional data for each line in the exploded tree.

In the past, I discovered that I have to write some obscure code in order to obtain the scrap and revision number, as follows.

DECLARE C3 CURSOR FOR SELECT PARTARC.SON, PARTARC.COEF, PARTARC.VAR, PARTARC.SONACT, ROUND (PARTARC.SONQUANT)
FROM PARTARC, PART ORIG
WHERE PARTARC.SON = ORIG.PART AND PARTARC.PART = :PART AND PARTARC.SON > 0
ORDER BY 4;
OPEN C3; GOTO 300 WHERE :RETVAL <= 0;
LABEL 210;
FETCH C3 INTO :SON, :COEF, :VAR, :SONACT, :SQ;
GOTO 299 WHERE :RETVAL <= 0;
:SCRAP = 0.0; :REV = 0;
SELECT SCRAP, SONREVNAME INTO :SCRAP, :REV FROM PARTARC ORIG WHERE PART = :SQ AND SON = :SON;
INSERT INTO STACK4 (KEY, REALDATA, INTDATA) VALUES (:SONACT, :SCRAP, :REV);

The final lines above show :SCRAP and :REV being stored into an auxiliary table that gets passed to the report along with the linked PARTARC table, so that the report can show the necessary values. The cursor uses ROUND (PARTARC.SONQUANT) as SONQUANT is a real number, but an internal part number is an integer, of course.

I wasted about an hour today because of this problem with SONREVNAME. It's not a field that I use myself, but a company for whom I am writing an exceedingly complicated BOM report needs it. Eventually the penny dropped as to the problem: I already had the code for obtaining the correct value of SCRAP, so I adapted it for this new program.

Monday 15 March 2021

Version control for Priority procedures

Someone asked how to maintain previous versions of procedures written in Priority; I answered that I had written a procedure that outputs the text to a text file that I store in a given directory. Someone else pointed out that Priority does have support for procedure versions, but when I looked into this, it appeared that this support is connected directly to a module that prepares 'update' files. These files are used when one develops on one machine and wants to install on another machine. I don't think that this was what the original question was about.

But this reply started me thinking: how hard would it be to develop a table (or two) that stores program text? The answer: fairly easy. I needed to define a table, a form and an interface. Procedures are maintained via a form called EPROG that is based on the EXEC table; this form has a child form called PROG that presents the procedure stages, and the text for each stage is shown in a form called PROGTEXT (not that this is important). The base table of PROG is PROGRAMS, whose primary key (or in Priority-speak, its auto-unique A index) is PROG. So I would need to define a table whose key included PROG.

The definition of my new table TEST_PROGVERSION is as follows

Field name | Type | Description | Key
TPV | Int | Auto-increment | A
PROG | Int | Pointer to procedure | U
VERSION | Int | Text version | U
CURDATE | Date | Date of version |

On the basis of this table, I defined a form with the same name; this form is a child form of the PROG form, and PROG in my form is equal to :$$.PROG, i.e. the PROG of the parent form. TPV and PROG are hidden fields, and the form is sorted by VERSION descending, i.e. the newest version is first. Originally I defined the form to be query only (i.e. type Q) but this led to problems with the interface, so the form is now of type N (no deletions) and the two visible fields are read-only.

After I had a table and form, I used the built-in procedure for defining a text form; this created a table and form with the name TEST_PROGVERSIONTEXT and added this new form as a child form to the TEST_PROGVERSION form.

An interface has to be designed in order to insert data into these tables; this is required because of the text form that has special requirements, otherwise I could simply insert the required data into the tables. The interface has to be four levels deep: first EPROG, then PROG, then TEST_PROGVERSION and finally TEST_PROGVERSIONTEXT. 

How does one store a version? I wrote a procedure that appears as a direct activation from form PROG. Here it is
LINK PROGRAMS TO :$.PAR;
SELECT PROG INTO :$.PRG FROM PROGRAMS WHERE PROG > 0;
UNLINK PROGRAMS;
:VERSION = 0;
SELECT MAX (VERSION) INTO :VERSION FROM TEST_PROGVERSION WHERE PROG = :$.PRG;
:PAR1 = '$';
LINK GENERALLOAD TO :$.GEN; ERRMSG 1 WHERE :RETVAL <= 0;
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT6) SELECT 1, '1', EXEC.ENAME FROM EXEC, PROGRAMS WHERE EXEC.EXEC = PROGRAMS.EXEC AND PROGRAMS.PROG = :$.PRG;
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, INT1) VALUES (2, '2', :$.PRG);
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, INT1, DATE1) VALUES (3, '3', :VERSION + 1, SQL.DATE8);
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT) SELECT 3 + SQL.LINE, '4', TEXT FROM PROGRAMSTEXT WHERE PROG = :$.PRG;
EXECUTE INTERFACE 'TEST_PROGVERSION', SQL.TMPFILE, '-L', :$.GEN;
GOTO 1 WHERE NOT EXISTS (SELECT 1 FROM ERRMSGS WHERE USER = SQL.USER AND TYPE = 'i');
EXECUTE WINACTIV '-R', 'INTERFACEERR';
LABEL 1;
UNLINK GENERALLOAD;
The entire process would have taken about twenty minutes, had I foreseen the problem of defining the new form not of type Q but of type N. A further waste of time happened when I looked at the text that was stored in the text form: it was aligned to the right. I tried to insert a line into the GENERALLOAD table to correct this, but I must have forgotten the correct commands. In the end, I realised that I can leave the text as it is, and if I need it left aligned, then I can use the command in the in-built HTML editor of the form that performs the switch and keeps the result.

Sunday 7 February 2021

Order of fields can matter in a form interface

I wrote several months ago about how the order of tuples can matter; what I didn't know at the time was that the order of the fields in an interface can also be critical.

It happened that I was asked to create an interface for invoices where the invoice had to have a specific project number; as I had already developed the interface, I simply added the project number at the end of the list of fields. Although theoretically this should be sufficient, it turned out that each customer is connected to some project that is not the project that should appear in the invoices. My explanation is that entering the customer number causes the customer's project to be entered into the 'project' field of the screen, and the later insertion of the specific project number then fails because there is already a value there.

The interface worked as my customer desired when the specific project number was the first field to be entered; again, the insertion of the customer's project failed because there was already a value in the 'projects' field.

So beware: sometimes the order of fields in an interface can be critical!

Friday 5 February 2021

A safe method of extracting 'son' parts from a bill of materials

I am often asked to prepare a type of report that requires descending through a part's bill of materials and doing something with the 'son' parts, for example showing the cost of raw materials for each part and the cost of work for that part. Such reports (or more correctly, the procedure that prepares the data to be shown in the report) use the external SONRAW procedure along with temporary PART and PARTARC tables. It also happens that the data for each primary part is to be stored separately; for example, such a report showing raw materials for an order could display the data in two different ways: either all the parts required for the complete order, or all the parts required for each line in the order. In the first case, SONRAW would be called once for the order, whereas the second case might require SONRAW to be called in a loop.

I would start such a procedure with the lines

LINK PART TO :$.PRT; ERRMSG 1 WHERE :RETVAL <= 0; LINK PARTARC TO :$.ARC; ERRMSG 1 WHERE :RETVAL <= 0;
Then parts would be entered into the linked table in the following manner
INSERT INTO PART (PART, PARTNAME, FATQUANT)
SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (SUM (ORDERITEMS.QUANT))
FROM ORDERITEMS, PART ORIG
WHERE ORDERITEMS.PART = ORIG.PART AND ORDERITEMS.ORD = :ORD AND ORIG.TYPE = 'P'
GROUP BY 1, 2;
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 0, '-N';
The above example assumes that the code is run in a loop, each time for a different order (represented by :ORD). After SONRAW, something would be done with the 'exploded tree' that is stored in the linked PARTARC table.

So far, so good. So where is the problem? It would often happen that I would add two lines before the 'INSERT INTO PART' statement -
DELETE FROM PART; DELETE FROM PARTARC;
These are linked tables, right? And so the deletion takes place on these linked tables. And yet somehow, whilst running a procedure that uses this code yesterday, the real PART and PARTARC tables were deleted! I have run such code thousands of times and there has never been a problem with it, but something must have happened to prevent the linkage and so the deletion removed the real tables. 'Fortunately' this was at the beginning of the day, so the previous day's backup was sufficient to restore the missing data.

As this is the second time in a month that I have managed to delete real tables, I am now trying to adopt the attitude that the DELETE statement MUST NEVER BE USED!!! So how can I ensure that I have empty temporary tables before the insertion of new parts? By using UNLINK and REMOVE:
UNLINK AND REMOVE PART; UNLINK AND REMOVE PARTARC;
LINK PART TO :$.PRT; ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC; ERRMSG 1 WHERE :RETVAL <= 0;
INSERT INTO PART (PART, PARTNAME, FATQUANT) SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (SUM (ORDERITEMS.QUANT)) ...
This looks like code waiting to be optimised - surely unlinking and then relinking a table will take time - but this is safe code, something much more important than saving a few seconds (as opposed to shutting down a company for several hours whilst data is restored - I claim that I'm checking the disaster recovery procedure).

What does the SDK have to say on this topic? Use the AND REMOVE option if you wish the linked file to be deleted when it is unlinked. This is necessary when working with loops, particularly when manipulations are carried out on the data in the linked file. If you do not remove the linked file, and the function using LINK and UNLINK is called more than once, you will receive the same copy of the table during the next link. So, if you want the LINK command to open a new (updated) copy of the table, use UNLINK AND REMOVE. That's fairly clear for a change. So as modern day Americans say (in English that would have thrown me out of school), "my bad".

Thursday 4 February 2021

A method of displaying in a report values that can come from one of several tables

It's not exactly secret that a report in Priority can display in one field a value that can come from one of  several tables, but this useful functionality is hidden so well that it might be that many programmers never come across it. An example of what I mean would be a report that displays sales to a customer, where a sale is defined as either a delivery note (DOCUMENTS) or a direct invoice (INVOICES) - specifically screens DOCUMENTS_D and AINVOICES. Such a report would show either DOCUMENTS.DOCNO or INVOICES.IVNUM in the 'document' field. I'm sure that there is a standard report that does this, but I couldn't find it, so instead look at the standard report DISTRLISTDOC, field #120.

I often use this technique in a procedure, normally using STACK8: in one of the key fields I would store the document's A value (i.e. DOCUMENTS.DOC or INVOICES.IV), and in the second key field, I would store a value to distinguish between the two, i.e. 1 means DOCUMENTS and 2 means INVOICES. When it comes to the report, I have to add several calculated fields:
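
The procedure side of that is then roughly as follows (a sketch; the linked-table parameter name and the WHERE clauses are illustrative, and in practice there would be conditions on the customer, dates, etc.):

LINK STACK8 TO :$.STK; ERRMSG 1 WHERE :RETVAL <= 0;
/* delivery notes: KEY2 = 1 */
INSERT INTO STACK8 (KEY1, KEY2) SELECT DOCUMENTS.DOC, 1 FROM DOCUMENTS WHERE DOCUMENTS.DOC > 0;
/* direct invoices: KEY2 = 2 */
INSERT INTO STACK8 (KEY1, KEY2) SELECT INVOICES.IV, 2 FROM INVOICES WHERE INVOICES.IV > 0;
UNLINK STACK8;
/* :$.STK is then passed to the report step */
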

DOCUMENTS.DOC ... = (STACK8.KEY2 = 1 ? STACK8.KEY1 : 0)
INVOICES.IV ... = (STACK8.KEY2 = 2 ? STACK8.KEY1 : 0)
(document number) (STACK8.KEY2 = 1 ? DOCUMENTS.DOCNO : INVOICES.IVNUM)
(document date) (STACK8.KEY2 = 1 ? DOCUMENTS.CURDATE : INVOICES.IVDATE)
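For completeness, here is a minimal sketch of the procedure side described above. The parameter name :$.STK and the selection conditions are hypothetical; the only point of interest is that KEY1 holds the A value and KEY2 holds the discriminator that the report fields test.

LINK STACK8 TO :$.STK;
ERRMSG 1 WHERE :RETVAL <= 0;
/* 1 = delivery note, 2 = invoice, matching the report fields above */
INSERT INTO STACK8 (KEY1, KEY2)
SELECT DOCUMENTS.DOC, 1
FROM DOCUMENTS, CUSTOMERS
WHERE DOCUMENTS.CUST = CUSTOMERS.CUST
AND DOCUMENTS.DOC > 0;
INSERT INTO STACK8 (KEY1, KEY2)
SELECT INVOICES.IV, 2
FROM INVOICES, CUSTOMERS
WHERE INVOICES.CUST = CUSTOMERS.CUST
AND INVOICES.IV > 0;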

This isn't particularly difficult to do when there are only two tables involved, but sometimes there are more, and these expressions become hairy. I was working on a report that can display data from these screens: DOCUMENTS_A, DOCUMENTS_D, DOCUMENTS_P, YINVOICES, ORDERS and PORDERS. It's a report that displays documents connected to a project, hence the variety of data. The 'selector' code (e.g. STACK8.KEY2 = 1 ? DOCUMENTS.DOCNO : INVOICES.IVNUM) has to be written at least three times (once for the document number, once for the date, once for the screen that should be opened for the document, and possibly once for the customer), and of course the possibility of error grows very quickly.

The other day it occurred to me that I could remove this complexity with a simple change: instead of storing the document number and type in the STACK8 table, I could use a few fields that exist in STACK4, namely 'docno', 'datadate' and 'chardata'. The primary key is a simple incrementing variable bearing no information; DOCUMENTS.DOCNO, INVOICES.IVNUM, ORDERS.ORDNAME, etc are stored in the 'docno' field; DOCUMENTS.CURDATE etc are stored in the 'datadate' field, and DOCUMENTS.TYPE (or similar) is stored in the 'chardata' field.

INSERT INTO STACK4 (KEY, INTDATA, DETAILS, DOCNO, REALDATA2, CHARDATA, DATADATE)
SELECT :LINE + SQL.LINE, :KEY, :DETAILS, DOCUMENTS.DOCNO,
(-1.0) * SUM (PART.COST * REALQUANT (TRANSORDER.QUANT)),
DOCUMENTS.TYPE, DOCUMENTS.CURDATE
FROM DOCUMENTS, TRANSORDER, PART, DOCUMENTSA, PART MPART ...
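The matching insert for, say, orders reuses exactly the same columns. The selection below is only a sketch (the WHERE clause is hypothetical, and 'O' is the marker letter used by the EXEC.ENAME expression further down); what matters is that ORDNAME and CURDATE go into the same DOCNO and DATADATE fields.

/* One common way of keeping the keys unique between inserts (hypothetical) */
:LINE = 0;
SELECT MAX (KEY) INTO :LINE FROM STACK4;
INSERT INTO STACK4 (KEY, DOCNO, DATADATE, CHARDATA)
SELECT :LINE + SQL.LINE, ORDERS.ORDNAME, ORDERS.CURDATE, 'O'
FROM ORDERS
WHERE ORDERS.ORD > 0;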
The report that receives the data no longer has to 'decode' the document number: it simply displays STACK4.DOCNO; the same goes for the date. The only field that is not simplified is the one that chooses which screen should be displayed if one clicks on the document number. I suppose that I could store the appropriate EXEC value in the stack; maybe this will come as a future improvement. The reason why I hesitate to do this is that there is no link between a table of data and the screen in which it is to be displayed. The way to do this is to add several queries such as
:EXEC = 0;
SELECT EXEC INTO :EXEC FROM EXEC WHERE ENAME = 'ORDERS' AND TYPE = 'F';
---- or ----
:EXEC = 0;
SELECT DOCTYPES.EXEC INTO :EXEC FROM DOCTYPES, DOCUMENTS
WHERE DOCTYPES.TYPE = DOCUMENTS.TYPE AND DOCUMENTS ....
This is a rare instance of something that is harder to do in a procedure than it is in a report. At the moment, the value in 'chardata' serves to identify which screen to display, in the following manner:
EXEC.ENAME =
(STACK4.CHARDATA = 'O' ? 'ORDERS' :
(STACK4.CHARDATA = 'Y' ? 'YINVOICES' :
(STACK4.CHARDATA = 'A' ? 'DOCUMENTS_A' :
(STACK4.CHARDATA = 'D' ? 'DOCUMENTS_D' :
(STACK4.CHARDATA = 'P' ? 'DOCUMENTS_P' :
(STACK4.CHARDATA = 'Q' ? 'PORDERS' : 'F'))))))

I should explain the final screen name, F. The report displays some data that is not connected to any screen; it happened that when the final line was 'PORDERS' : '')))))), these data were not displayed. Priority has a screen called 'F' that is based on the DUMMY table and seems ideal for this purpose.

Tuesday 2 February 2021

Filtering on non-key fields

Nearly five years (!) have passed since I last wrote on this topic (then called 'zero values in Priority') on the other blog. I will explain again: normally, when one adds parameters to procedures, the parameter is based on a field that is guaranteed to have a value (e.g. the customer number in an order, the order type or similar). But it often happens that one wants a parameter on a field that is not guaranteed to have a value - today's blog will discuss filtering customers by their 'customer type' field, which is not mandatory.

The original post showed how to filter successfully when a cursor is used, but I never found a satisfactory solution to filtering when tuples are saved using the 'insert/select' syntax. Looking at the old code the other day, an answer popped into my head.

The first part is the same as ever:

:LINKED = :UNLINKED = 0;
SELECT COUNT (*) INTO :UNLINKED FROM CTYPE;
LINK CTYPE TO :$.CTP;
ERRMSG 1 WHERE :RETVAL <= 0;
SELECT COUNT (*) INTO :LINKED FROM CTYPE;

But the way that this is used is different:

INSERT INTO STACK4 (KEY, INTDATA, REALDATA)
SELECT ORDERS.CUST, CUSTOMERS.CTYPE, SUM (ORDERS.DISPRICE)
FROM ORDERS, CPROFTYPES, BRANCHES, CUSTOMERS, CTYPE
WHERE ORDERS.ORDTYPE = CPROFTYPES.CPROFTYPE
AND ORDERS.BRANCH = BRANCHES.BRANCH
AND ORDERS.CUST = CUSTOMERS.CUST
AND CUSTOMERS.CTYPE = CTYPE.CTYPE
AND ORDERS.ORDSTATUS <> -6
AND ORDERS.CURDATE BETWEEN :$.FDT AND :$.TDT
GROUP BY 1, 2
HAVING SUM (ORDERS.DISPRICE) >= :$.NUM;
GOTO 1 WHERE :LINKED = :UNLINKED;
/* The user requested specific values of CTYPE, so remove from STACK4 any tuples with intdata (ie ctype) = 0 */
DELETE FROM STACK4 WHERE INTDATA = 0;
LABEL 1;

In other words: collect and store in STACK4 all the data as one would normally do. If the linked CTYPE table contains all the values, then all the suitable customers are selected: nothing new here. If only certain values are chosen in CTYPE, then all the customers with those values, along with the customers with no value (i.e. CUSTOMERS.CTYPE = 0), are selected.

After the insertion statement has finished, there is a check to see whether only specific values of CTYPE were chosen, and if so, all tuples with the value 0 are deleted from the stack. Concretely: if the user chooses only some of the possible customer types, :LINKED will be smaller than :UNLINKED, so the GOTO is not taken and the zero-valued tuples are removed; if the user enters *, the two counts are equal and nothing is deleted.
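For comparison, when a cursor is used the same check can be applied per customer. This is only a sketch (not necessarily the original post's code), assuming the fetch has put the customer's type into :CTYPE and that :LINKED and :UNLINKED are set as above:

GOTO 10 WHERE :LINKED = :UNLINKED; /* user entered '*' - accept every customer */
LOOP 100 WHERE :CTYPE = 0;         /* specific types requested - skip customers with no type */
LABEL 10;
/* ... continue processing the customer ... */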

Sunday 24 January 2021

Even more on 'a safer method': possibly the safest method of all

One comment to these blogs basically said that the PAR parameter in the initial INPUT step doesn't have to be the same as the implicit PAR parameter in HTMLCURSOR. Bearing this in mind, I wrote a new procedure that has the following intermediate step between the INPUT and HTMLCURSOR steps.
SELECT :$.PAR INTO :OLDPAR FROM DUMMY;
LINK CUSTOMERS OLD TO :OLDPAR;
SELECT SQL.TMPFILE INTO :$.PAR FROM DUMMY;
LINK CUSTOMERS TO :$.PAR;
DECLARE CUR CURSOR FOR SELECT CUST, CUSTNAME FROM CUSTOMERS OLD WHERE CUST > 0;
OPEN CUR;
GOTO 300 WHERE :RETVAL <= 0;
LABEL 100;
FETCH CUR INTO :CST, :CNAME;
GOTO 200 WHERE :RETVAL <= 0;
LOOP 100 WHERE RSTRIND (:CNAME, 1, 1) <> '0';
INSERT INTO CUSTOMERS (CUST, CUSTNAME) VALUES (:CST, :CNAME);
LOOP 100;
LABEL 200;
CLOSE CUR;
LABEL 300;
UNLINK CUSTOMERS OLD;
UNLINK CUSTOMERS;

This actually works! Obviously I'm taking an arbitrary range of customers here, but one customer that meets the criterion (a number ending in 0) does have orders, and the procedure does create a report of orders for that specific customer. In the real program, a series of tests is run, and only customers that pass these tests are inserted into the linked table.

This method is the safest and easiest - no deletions, no external tables and one person can run the report more than once concurrently with different parameters.

One reason why my previous tests using STACK as an intermediate table failed is that the new record has to be inserted with both the A and U keys (cust and custname respectively). STACK has only one field into which I was inserting the A key; looking at this now in retrospect, it's clear that this wasn't sufficient, as every tuple has to have a value for the U key.

Saturday 16 January 2021

Still more on 'a safer method'

Yesterday's blog included this statement: "HTMLCURSOR then becomes SELECT CUST FROM CUSTOMERS WHERE CUST > 0 AND TEST_WWWFLAG = 'Y'. Problem solved."

Looking at it this morning, this statement is not strictly true: the problem is solved as long as only one person is running the procedure at any given time.

A better way of solving this would be to define a new table, say TEST_CUSTUSER, with two fields, CUST and USER, that together comprise the table's key. Then, instead of writing UPDATE CUSTOMERS SET TEST_WWWFLAG = 'Y', one would write

DELETE FROM TEST_CUSTUSER WHERE USER = SQL.USER;
INSERT INTO TEST_CUSTUSER (CUST, USER)
SELECT CUST, SQL.USER
FROM CUSTOMERS /* may or may not be linked */
WHERE CUST > 0;
And of course HTMLCURSOR has to be revised to become
SELECT CUST FROM TEST_CUSTUSER WHERE CUST > 0 AND USER = SQL.USER;
This is still not ideal: whilst two users can now run the procedure simultaneously, a given user cannot run the procedure with different inputs simultaneously. Adding SQL.DATE to the key would not work, as the user could invoke the procedure twice with different parameters within the same minute; also, SQL.DATE at the beginning of the procedure would probably not be the same as SQL.DATE at the HTMLCURSOR stage.

A possible solution would be to add another field to the table, GUID. At the beginning of the program one would add the statement :$.MYG = SQL.GUID, then add this field to the delete and insert statements. I suspected, though, that HTMLCURSOR might not recognise this global variable ... and indeed it doesn't. Even had this worked, one doesn't need the SQL.GUID function (which returns a random 32-character string); one could simply define an integer whose maximum value would be established on entering the procedure, then incremented for the current run. But again, there would be no way of using this field in HTMLCURSOR. Even storing this maximum value in the LASTS table and accessing it from there in HTMLCURSOR wouldn't work - LASTS is not maintained on a per-user basis, and even if it were, the second run might finish before the first, so the value in LASTS would not be the correct one.
 
Any ideas? 

Friday 15 January 2021

A safer method of choosing tuples with complex conditions within an HTML document

Continuing this unintended series of blogs about choosing customers for display within an HTML document when the conditions for the inclusion of a customer are complex .... We've seen that deleting records from the linked CUSTOMERS table can be dangerous, because in certain circumstances that table might not be linked.

There was a very interesting comment left on that entry that suggested using a temporary table such as STACK: one could link this table, load it with records from CUSTOMERS, test the values and then delete from the STACK table. It wouldn't matter too much if this table were not linked. Whilst this technique can work in complex procedures, it won't work in an HTML document. The SDK states, in the section 'Basic commands' in chapter 5, that HTMLCURSOR "creates a linked file that holds the records selected in the PAR input parameter" (emphasis mine). I have always wondered how this command 'knew' from which table to select, and now I know. Linking STACK to :$.PAR and then inserting records into this linked table from CUSTOMERS didn't work in my tests (the insertion worked fine, but the procedure did not display any documents).

Here is a much safer method, although it requires some preparation. Add a field to the CUSTOMERS table and call it something like TEST_WWWFLAG, with type CHAR, length 1. In the step that checks the suitability of records for inclusion, set this field to 'Y' before the cursor iterates over the records: one can simply write UPDATE CUSTOMERS SET TEST_WWWFLAG = 'Y'. As this should be the only place in Priority that accesses this field, it doesn't matter whether the table is linked or unlinked. At the end of the cursor code, replace the infamous DELETE statement with this: UPDATE CUSTOMERS SET TEST_WWWFLAG = '\0' WHERE CUST = :CST. Thus any record that fails all the checks will have this field empty.
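Putting the pieces together, here is a minimal sketch of the flag-based flow; the cursor name, labels and suitability tests are placeholders, echoing the cursor shown in the NFILE entry below.

/* CUSTOMERS may be linked or not - the flag works either way */
UPDATE CUSTOMERS SET TEST_WWWFLAG = 'Y' WHERE CUST > 0;
DECLARE C15 CURSOR FOR SELECT CUST FROM CUSTOMERS WHERE CUST > 0;
OPEN C15;
GOTO 999 WHERE :RETVAL <= 0;
LABEL 100;
FETCH C15 INTO :CST;
GOTO 800 WHERE :RETVAL <= 0;
/* ... run the suitability tests; LOOP 100 as soon as one of them passes ... */
LABEL 700;
/* The customer failed every test: clear the flag instead of deleting the record */
UPDATE CUSTOMERS SET TEST_WWWFLAG = '\0' WHERE CUST = :CST;
LOOP 100;
LABEL 800;
CLOSE C15;
LABEL 999;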

HTMLCURSOR then becomes SELECT CUST FROM CUSTOMERS WHERE CUST > 0 AND TEST_WWWFLAG = 'Y'. Problem solved.

If anyone is wondering: fortunately the deletion of customers happened at the end of the day, so not many people were inconvenienced. After a series of phone calls, the database administrator was able to restore the missing table. I checked that the data had been restored successfully by accessing customer orders that had been entered that day and then checking their status logs to see up to what time there were changes. Even so, there was one check that had not occurred to me: it might have been that a customer had been added after the backup that was restored; in this case, there would be no record in CUSTOMERS but there would be one in CUSTOMERSA. A quick check revealed that indeed there was such a record, so I quickly added a new customer; this received the same autoinc key number as the final record in CUSTOMERSA. In the customers screen, I saw that this customer had a linked contact person (!), causing me to check whether there was a sales opportunity for this unknown customer. Indeed there was, enabling me to see who had opened the customer; from this person I obtained the customer's name. So that episode is safely behind us.

Tuesday 12 January 2021

NFILE: advantages and disadvantages

I am sure that I have written about NFILE at some time but I can't find the reference and so I'll start from the beginning. [Edit: I found the entry which now has the NFILE label]

Let's say that one has a procedure that accepts a list of customers as its input - let's even say that it's the HTML document that I worked on yesterday. When one enters a list of customers, the linked customers table will include only those customers, obviously. When one writes * as the input, all the customers from the unlinked customer table have to be copied into the linked customers table; this takes time! 

Enter NFILE: this was introduced in version 17, IIRC. The documentation says: "Select NFILE if the linked file comprises a group of records and you want the link table to remain empty when the user enters * or leaves the field empty." How does one use this? By writing code as follows:

LINK CUSTOMERS TO :$.CST; /* NFILE */
ERRMSG 1 WHERE :RETVAL <= 0;
GOTO 1 FROM CUSTOMERS WHERE CUST > 0;
UNLINK CUSTOMERS;
LABEL 1;

This saves the time spent copying records from the unlinked table to the linked table: if the user entered specific customers, the GOTO is taken and the linked table is used as usual; if the user entered *, the linked table is empty, so it is unlinked and the rest of the procedure reads the real table directly, with no copying at all. Tables that benefit from this are PART, DOCUMENTS, INVOICES and CUSTOMERS. Maybe a few others.

Now back to the HTML document that I described yesterday. I realised that I could filter unwanted customers using a cursor and a series of expressions such as

GOTO 999 WHERE NOT EXISTS (SELECT 1 FROM BRANCHES WHERE BRANCH > 0);
LINK CUSTOMERS TO :$.CST;
LINK BRANCHES TO :$.BRN;
DECLARE C15 CURSOR FOR SELECT CUST FROM CUSTOMERS WHERE CUST > 0;
OPEN C15;
GOTO 999 WHERE :RETVAL <= 0;
LABEL 100;
FETCH C15 INTO :CST;
GOTO 800 WHERE :RETVAL <= 0;
GOTO 200 WHERE :$.A3 <> 'Y'; /* No need to check price quotes if not wanted */
:HASDATA = 0;
SELECT 1 INTO :HASDATA
FROM CPROF, CPROFA, BRANCHES
WHERE CPROF.PROF = CPROFA.PROF
AND CPROFA.BRANCH = BRANCHES.BRANCH
AND CPROF.CUST = :CST
AND CPROF.PDATE >= :$.DAT;
LOOP 100 WHERE :HASDATA = 1;
....
LABEL 700;
/* If we are here, then the customer has failed all the tests so delete it from the linked table */
DELETE FROM CUSTOMERS WHERE CUST = :CST;
LOOP 100;
LABEL 800;
...

This code is repeated once for each type of record - sales opportunities, price quotes, orders and projects. The code at the beginning checks whether any branches were passed as a parameter: there's no point in checking anything if there are no branches. I added all the necessary code and ran the program using one specific branch without designating customers; the procedure took a few minutes to sort itself out and then showed the output. As designed, only customers with data from the chosen branch were shown.

I wondered how I could improve the procedure's speed - the obvious step would be to define the CUSTOMERS parameter as type NFILE instead of FILE. There would be no need to copy customers to the linked table, which would save time. So I added this change, ran the program ... and then realised with a shudder that the DELETION WAS NOT FROM THE LINKED TABLE BUT FROM THE REAL TABLE!!! The first code snippet that I showed unlinks CUSTOMERS when no specific customers have been passed; thus the DELETE statement worked not on a linked table but rather on the real one.

I reached for the power switch (figuratively) as soon as possible, but it was too late. A few minutes later, someone called to ask whether customers had disappeared. My code had managed to delete about 16,000 customers out of 24,000 in a few minutes. Straightaway I phoned the network manager, who took Priority offline, and then we turned to our database administrator, who started restoring the damaged table.

Bottom line: be wary of NFILE! Normally it can help but sometimes it is dangerous.