Sunday 7 February 2021

Order of fields can matter in a form interface

I wrote several months ago about how the order of tuples can matter; what I didn't know at the time was that the order of the fields in an interface can also be critical. The picture below shows an example of a form interface; it is not the interface discussed in the text that follows.

It happened that I was asked to create an interface for invoices where the invoice had to have a specific project number; as I had already developed the interface, I simply added the project number at the end of the list of fields. Although in theory this should have been sufficient, it turned out that each customer is connected to some project that is not the project that should appear in the invoices. My explanation is that entering the customer number causes the customer's project to be entered into the 'project' field of the screen, and so the later insertion of the specific project number fails because there is already a value in that field.

The interface worked as my customer desired when the specific project number was the first field to be entered; this time it was the insertion of the customer's project that failed, because there was already a value in the 'project' field.

So beware: sometimes the order of fields in an interface can be critical!

Friday 5 February 2021

A safe method of extracting 'son' parts from a bill of materials

I am often asked to prepare a type of report that requires descending through a part's bill of materials and doing something with the 'son' parts, for example showing the cost of raw materials for each part and the cost of work for that part. Such reports (or more correctly, the procedures that prepare the data to be shown in them) use the external SONRAW procedure along with temporary, linked PART and PARTARC tables. Sometimes the data has to be accumulated separately for each primary part; for example, a report showing the raw materials for an order could display the data in two different ways: either all the parts required for the complete order, or the parts required for each line in the order. In the first case, SONRAW would be called once for the entire order, whereas the second case might require SONRAW to be called in a loop, once per order line.

I would start such a procedure with the lines

LINK PART TO :$.PRT;
ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC;
ERRMSG 1 WHERE :RETVAL <= 0;
Then parts would be entered into the linked PART table in the following manner:
INSERT INTO PART (PART, PARTNAME, FATQUANT)
SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (SUM (ORDERITEMS.QUANT))
FROM ORDERITEMS, PART ORIG
WHERE ORDERITEMS.PART = ORIG.PART
AND ORDERITEMS.ORD = :ORD
AND ORIG.TYPE = 'P'
GROUP BY 1, 2;
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 0, '-N';
The above example assumes that the code is run in a loop, each time for a different order (represented by :ORD). After SONRAW, something would be done with the 'exploded tree' that is stored in the linked PARTARC table.

So far, so good. So where is the problem? It would often happen that I would add two lines before the 'INSERT INTO PART' statement -
DELETE FROM PART;
DELETE FROM PARTARC;
These are linked tables, right? And so the deletion takes place on the linked copies. And yet somehow, whilst running a procedure that uses this code yesterday, the contents of the real PART and PARTARC tables were deleted! I have run such code thousands of times and there has never been a problem with it, but something must have happened to prevent the linkage, and so the deletion emptied the real tables. 'Fortunately' this was at the beginning of the day, so the previous day's backup was sufficient to restore the missing data.

As this is the second time in a month that I have managed to delete the contents of real tables, I am now trying to adopt the attitude that the DELETE statement MUST NEVER BE USED!!! So how can I ensure that I have empty temporary tables before the insertion of new parts? By using UNLINK AND REMOVE:
UNLINK AND REMOVE PART;
UNLINK AND REMOVE PARTARC;
LINK PART TO :$.PRT;
ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC;
ERRMSG 1 WHERE :RETVAL <= 0;
INSERT INTO PART (PART, PARTNAME, FATQUANT)
SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (SUM (ORDERITEMS.QUANT))
...
This looks like code waiting to be optimised - surely unlinking and then relinking a table takes time - but it is safe code, which is much more important than saving a few seconds (as opposed to shutting down a company for several hours whilst data is restored - I claim that I was merely checking the disaster recovery procedure).

What does the SDK have to say on this topic? "Use the AND REMOVE option if you wish the linked file to be deleted when it is unlinked. This is necessary when working with loops, particularly when manipulations are carried out on the data in the linked file. If you do not remove the linked file, and the function using LINK and UNLINK is called more than once, you will receive the same copy of the table during the next link. So, if you want the LINK command to open a new (updated) copy of the table, use UNLINK AND REMOVE." That's fairly clear, for a change. So, as modern-day Americans say (in English that would have got me thrown out of school), "my bad".
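To tie the pieces of this post together, here is a minimal sketch of the looped variant mentioned at the beginning - one SONRAW call per order line - using UNLINK AND REMOVE instead of DELETE. The cursor scaffolding, the :ORDI variable and the choice of ORDERITEMS.ORDI as the loop key are my assumptions; the rest follows the snippets shown above.

/* Sketch only: one SONRAW call per order line */
DECLARE ORDICUR CURSOR FOR
SELECT ORDERITEMS.ORDI
FROM ORDERITEMS
WHERE ORDERITEMS.ORD = :ORD;
OPEN ORDICUR;
GOTO 300 WHERE :RETVAL <= 0;
LABEL 100;
FETCH ORDICUR INTO :ORDI;
GOTO 200 WHERE :RETVAL <= 0;
/* Start each iteration with fresh, empty copies of the linked tables */
UNLINK AND REMOVE PART;
UNLINK AND REMOVE PARTARC;
LINK PART TO :$.PRT;
ERRMSG 1 WHERE :RETVAL <= 0;
LINK PARTARC TO :$.ARC;
ERRMSG 1 WHERE :RETVAL <= 0;
/* Enter the parts of this order line only */
INSERT INTO PART (PART, PARTNAME, FATQUANT)
SELECT ORIG.PART, ORIG.PARTNAME, REALQUANT (ORDERITEMS.QUANT)
FROM ORDERITEMS, PART ORIG
WHERE ORDERITEMS.PART = ORIG.PART
AND ORDERITEMS.ORDI = :ORDI
AND ORIG.TYPE = 'P';
EXECUTE SONRAW :$.PRT, SQL.DATE8, :$.ARC, 0, '-N';
/* ... do something here with the exploded tree in PARTARC ... */
LOOP 100;
LABEL 200;
CLOSE ORDICUR;
LABEL 300;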

Thursday 4 February 2021

A method of displaying in a report values that can come from one of several tables

It's not exactly a secret that a report in Priority can display in one field a value that can come from one of several tables, but this useful functionality is hidden so well that many programmers may never have come across it. An example of what I mean would be a report that displays sales to a customer, where a sale is defined as either a delivery note (DOCUMENTS) or a direct invoice (INVOICES) - specifically, the screens DOCUMENTS_D and AINVOICES. Such a report would show either DOCUMENTS.DOCNO or INVOICES.IVNUM in the 'document' field. I'm sure that there is a standard report that does this, but I couldn't find it; instead, look at the standard report DISTRLISTDOC, field #120.

I often use this technique in a procedure, normally using STACK8: in one of the key fields I store the document's autounique value (i.e. DOCUMENTS.DOC or INVOICES.IV), and in the second key field I store a value that distinguishes between the two tables, i.e. 1 means DOCUMENTS and 2 means INVOICES. When it comes to the report, I have to add several calculated fields:

DOCUMENTS.DOC ... = (STACK8.KEY2 = 1 ? STACK8.KEY1 : 0)
INVOICES.IV ... = (STACK8.KEY2 = 2 ? STACK8.KEY1 : 0)
(document number) (STACK8.KEY2 = 1 ? DOCUMENTS.DOCNO : INVOICES.IVNUM)
(document date) (STACK8.KEY2 = 1 ? DOCUMENTS.CURDATE : INVOICES.IVDATE)
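For what it's worth, the procedure side that fills STACK8 in this fashion might look roughly like the sketch below; only KEY1 and KEY2 are shown, the date range parameters (:$.FDT and :$.TDT) are assumed for illustration, and the WHERE clauses are placeholders rather than the real selection.

/* Delivery notes: internal id in KEY1, discriminator 1 in KEY2 */
INSERT INTO STACK8 (KEY1, KEY2)
SELECT DOCUMENTS.DOC, 1
FROM DOCUMENTS
WHERE DOCUMENTS.TYPE = 'D'
AND DOCUMENTS.CURDATE BETWEEN :$.FDT AND :$.TDT;
/* Direct invoices: internal id in KEY1, discriminator 2 in KEY2 */
INSERT INTO STACK8 (KEY1, KEY2)
SELECT INVOICES.IV, 2
FROM INVOICES
WHERE INVOICES.IVDATE BETWEEN :$.FDT AND :$.TDT;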

This isn't particularly difficult to do when there are only two tables involved, but sometimes there are more, and these expressions become hairy. I was working on a report that can display data from these screens: DOCUMENTS_A, DOCUMENTS_D, DOCUMENTS_P, YINVOICES, ORDERS and PORDERS. It's a report that displays documents connected to a project, hence the variety of data. The 'selector' code (e.g. STACK8.KEY2 = 1 ? DOCUMENTS.DOCNO : INVOICES.IVNUM) has to be written at least three times (once for the document number, once for the date, once for the screen that should be opened for the document, and possibly once for the customer), and of course the possibility of error grows very quickly.

The other day it occurred to me that I could remove this complexity with a simple change: instead of storing the document's internal value and type in the STACK8 key fields, I could use a few fields that exist in STACK4, namely 'docno', 'datadate' and 'chardata'. The primary key is a simple incrementing variable bearing no information; DOCUMENTS.DOCNO, INVOICES.IVNUM, ORDERS.ORDNAME, etc. are stored in the 'docno' field; DOCUMENTS.CURDATE etc. are stored in the 'datadate' field, and DOCUMENTS.TYPE (or similar) is stored in the 'chardata' field.

INSERT INTO STACK4 (KEY, INTDATA, DETAILS, DOCNO, REALDATA2, CHARDATA, DATADATE)
SELECT :LINE + SQL.LINE, :KEY, :DETAILS, DOCUMENTS.DOCNO,
(-1.0) * SUM (PART.COST * REALQUANT (TRANSORDER.QUANT)),
DOCUMENTS.TYPE, DOCUMENTS.CURDATE
FROM DOCUMENTS, TRANSORDER, PART, DOCUMENTSA, PART MPART
...
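The same pattern then repeats for each of the other sources. For example, a parallel insertion for sales orders might look roughly like this - a sketch only: the selection conditions are elided, and the literal 'O' in 'chardata' matches the selector shown further down.

INSERT INTO STACK4 (KEY, INTDATA, DETAILS, DOCNO, CHARDATA, DATADATE)
SELECT :LINE + SQL.LINE, :KEY, :DETAILS, ORDERS.ORDNAME, 'O', ORDERS.CURDATE
FROM ORDERS
/* :LINE is assumed to have been advanced past the keys already used */
WHERE ORDERS.ORDSTATUS <> -6
...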
The report that receives the data no longer has to 'decode' the document number: it simply displays STACK4.DOCNO, and the same goes for the date. The only field that is not simplified is the one that chooses which screen should be displayed if one clicks on the document number. I suppose that I could store the appropriate EXEC value in the stack; maybe this will come as a future improvement. The reason why I hesitate to do this is that there is no direct link between a table of data and the screen in which it is to be displayed; the way to obtain it is to add several queries such as
:EXEC = 0;
SELECT EXEC INTO :EXEC FROM EXEC WHERE ENAME = 'ORDERS' AND TYPE = 'F';
---- or ----
:EXEC = 0;
SELECT DOCTYPES.EXEC INTO :EXEC
FROM DOCTYPES, DOCUMENTS
WHERE DOCTYPES.TYPE = DOCUMENTS.TYPE
AND DOCUMENTS ....
This is a rare instance of something that is harder to do in a procedure than it is in a report. At the moment, the value in 'chardata' serves to identify which screen to display, in the following manner:
EXEC.ENAME =
(STACK4.CHARDATA = 'O' ? 'ORDERS' :
(STACK4.CHARDATA = 'Y' ? 'YINVOICES' :
(STACK4.CHARDATA = 'A' ? 'DOCUMENTS_A' :
(STACK4.CHARDATA = 'D' ? 'DOCUMENTS_D' :
(STACK4.CHARDATA = 'P' ? 'DOCUMENTS_P' :
(STACK4.CHARDATA = 'Q' ? 'PORDERS' : 'F'))))))

I should explain the final screen name, F. The report displays some data that is not connected to any screen; it happened that when the final alternative was 'PORDERS' : '')))))), these data were not displayed. Priority has a screen called 'F' that is based on the DUMMY table and seems ideal for this purpose.

Tuesday 2 February 2021

Filtering on non-key fields

Nearly five years (!) have passed since I last wrote on this topic (then called 'zero values in Priority') on the other blog. I will explain again: normally, when one adds parameters to a procedure, the parameter is based on a field that is guaranteed to have a value (e.g. the customer number in an order, the order type or similar). But it often happens that one wants a parameter on a field that is not guaranteed to have a value - today's blog will discuss filtering customers by their 'customer type' field, which is not mandatory.

The original post showed how to filter successfully when a cursor is used, but I never found a satisfactory solution to filtering when tuples are saved using the 'insert/select' syntax. Looking at the old code, an answer popped into my head.

The first part is the same as ever:

:LINKED = :UNLINKED = 0;
SELECT COUNT (*) INTO :UNLINKED FROM CTYPE;
LINK CTYPE TO :$.CTP;
ERRMSG 1 WHERE :RETVAL <= 0;
SELECT COUNT (*) INTO :LINKED FROM CTYPE;

But the way that this is used is different:

INSERT INTO STACK4 (KEY, INTDATA, REALDATA)
SELECT ORDERS.CUST, CUSTOMERS.CTYPE, SUM (ORDERS.DISPRICE)
FROM ORDERS, CPROFTYPES, BRANCHES, CUSTOMERS, CTYPE
WHERE ORDERS.ORDTYPE = CPROFTYPES.CPROFTYPE
AND ORDERS.BRANCH = BRANCHES.BRANCH
AND ORDERS.CUST = CUSTOMERS.CUST
AND CUSTOMERS.CTYPE = CTYPE.CTYPE
AND ORDERS.ORDSTATUS <> -6
AND ORDERS.CURDATE BETWEEN :$.FDT AND :$.TDT
GROUP BY 1, 2
HAVING SUM (ORDERS.DISPRICE) >= :$.NUM;
GOTO 1 WHERE :LINKED = :UNLINKED;
/* The user requested specific values of CTYPE, so remove from STACK4 any tuples with intdata (ie ctype) = 0 */
DELETE FROM STACK4 WHERE INTDATA = 0;
LABEL 1;

In other words: collect and store in STACK4 all the data, as one would normally do. If the linked CTYPE table contains all the values (i.e. the user did not choose specific customer types), then all the suitable customers are selected: nothing new here. If only certain values are chosen in CTYPE, then the customers with those values are selected, along with the customers with no value (i.e. CUSTOMERS.CTYPE = 0) - presumably because the empty record with key 0 is always present in the linked table and so satisfies the join.

After the insertion statement has finished, there is a check to see whether only specific values of CTYPE were chosen; if so, all tuples with INTDATA = 0 (i.e. customers with no customer type) are deleted from the stack.
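For completeness, here is how I imagine the same :LINKED/:UNLINKED check being used in the cursor-based approach mentioned above. This is only a sketch of the idea, not the code from the original post; the cursor name, the fetched variables and the conditional LOOP used to skip rows are my own assumptions.

DECLARE CTCUR CURSOR FOR
SELECT CUSTOMERS.CUST, CUSTOMERS.CTYPE
FROM CUSTOMERS, CTYPE
WHERE CUSTOMERS.CTYPE = CTYPE.CTYPE;
OPEN CTCUR;
GOTO 300 WHERE :RETVAL <= 0;
LABEL 100;
FETCH CTCUR INTO :CUST, :CTYPE;
GOTO 200 WHERE :RETVAL <= 0;
/* The user chose specific customer types, so skip customers with no value */
LOOP 100 WHERE :CTYPE = 0 AND :LINKED <> :UNLINKED;
/* ... process this customer ... */
LOOP 100;
LABEL 200;
CLOSE CTCUR;
LABEL 300;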