Sunday 6 December 2020

Continuing the MAILMERGE topic

After successfully developing the letter, my next task was to write a procedure that would be executed as a direct activation from the orders screen and would send the letter via email. My original code was something like this

LINK ORDERS TO :$.PAR;
ERRMSG 1 WHERE :RETVAL <= 0;
:FNAME = 'X:/1/TEST_SENDORDLETTER.pdf';
EXECUTE WINHTML '-d', 'TEST_ORDLETTER', 'ORDERS', :$.PAR, '-pdf', :FNAME;
:EMAIL = '....';
MAILMSG 10 TO EMAIL :EMAIL DATA :FNAME;

I'm leaving out all the changes that were added later - I ended up using the interface for mail messages instead of MAILMSG and that's not really relevant.

Every time I tried this, the procedure worked fine and sent the mail containing the letter. But every time the user ran it, the letter was created containing only the logo and signature: the important part of the letter was missing.

It took some time to figure out what was missing: I had originally run the mail merge program and had defined what the format was to be. Someone running the 'envelope' procedure as a direct activation would not get the chance to define the format of the mail merge.  OK: there is a paragraph in the SDK that shows how to overcome this -

/* Set print format */
:EXEC = 0;
SELECT EXEC INTO :EXEC FROM EXEC
WHERE TYPE = 'P' AND ENAME = 'TEST_ORDLETTER';
UPDATE PRINTFORMAT SET VALUE = -102 /* required format */
WHERE EXEC = :EXEC AND USER = SQL.USER;

I've used the same code (albeit with a different ENAME and format) in a similar program, where it worked fine. But here the print format was still not being set for the user. At first I thought that the user would have to run the mail merge procedure once from the menu in order to set the print format, which is when the light finally went on in my head: the above code fails because there is no record in PRINTFORMAT to update! So I added code to insert the necessary values into that table; if the insert succeeds, the code jumps over the 'update' statement. Looking at this again, the 'update' statement should come first, and only if it fails should the 'insert' be executed.

The above is not written in the SDK!

There are two pitfalls here: (1) The inner procedure has to appear in a menu that the user has permission for; (2) The format has to be set.
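The update-first-then-insert pattern described above is generic; here is a minimal sketch of it in Python with sqlite3 (the table and column names are stand-ins for PRINTFORMAT, not Priority syntax):

```python
import sqlite3

# Stand-in for the PRINTFORMAT table: one row per (exec, user).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE printformat (exec INTEGER, user INTEGER, value INTEGER, "
            "PRIMARY KEY (exec, user))")

def set_print_format(con, exec_id, user, value):
    # Update first; only if no row was touched, insert a new one.
    cur = con.execute("UPDATE printformat SET value = ? WHERE exec = ? AND user = ?",
                      (value, exec_id, user))
    if cur.rowcount == 0:
        con.execute("INSERT INTO printformat (exec, user, value) VALUES (?, ?, ?)",
                    (exec_id, user, value))

set_print_format(con, 17, 42, -102)   # first call inserts
set_print_format(con, 17, 42, -103)   # second call updates
print(con.execute("SELECT value FROM printformat").fetchall())  # → [(-103,)]
```

Either ordering works; update-first simply avoids relying on the insert failing against the primary key.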

Thursday 3 December 2020

The Letter Generator and Mail Merge

One of my coworkers requested a few days ago that I create a form letter for a given customer order in which certain data from the order would be displayed. I wasn't initially aware of any possibility of doing so in Priority, but after a quick look through the SDK, I discovered that the possibility not only exists but is seemingly simple.

As usual, it was very difficult to understand anything from the SDK, so what follows is my explanation. The goal is to create a form letter with data fields, like this

Dear <customer name>, <address> <contact's telephone number>
With regard to your order <purchase order number>, we are pleased to inform you that we intend to supply the order on <supply date>.

How does one do this? My explanation is as follows: 

First, one creates a report that contains all the necessary fields for the mail merge. The standard example is a report called CUSTLETTERREP; this report can be used whenever one wants mail merge based on customers. As I needed it for customer orders, I had to create a similar report that is based on ORDERS. One creates what seems to be a standard report, but with no parameters; these are supplied by joining ORDERS.ORD to a field called LETTERSTACK.KEY1. Also, LETTERSTACK.USER should be the current user.

Once this is done, one copies the procedure CUSTLETTER to a new procedure (let's call it TEST_ORDLETTER) that is very similar to an HTML document. There are a few changes that need to be made: 

  1. The Input stage should reference ORDERS.ORDNAME, not CUSTOMERS.CUSTNAME
  2. HTMLCURSOR now becomes SELECT ORD FROM ORDERS WHERE ORD > 0
  3. SQLI (stage 30) now becomes
    /************ Get ORD from HTMLVALUE by priform ************/
    :$.ORD = ATOI(:HTMLVALUE);
    :$.CLN = 'ORDERS.ORD';
    INSERT INTO LETTERSTACK (USER, KEY1, KEY2, STATUSTYPE)
    VALUES (SQL.USER, :$.ORD, 0, 'O');
    SELECT ORDNAME INTO :$.NUM FROM ORDERS ORIG WHERE ORD = :$.ORD;
  4. The final stage is now TEST_ORDERSREP instead of CUSTLETTERREP
Maybe that sounds very complicated but it's actually reasonably simple.

Then one goes to the 'letter generator' program which is here: CRM > Customer Relations Management > Letter Generator. Here one defines the name of the desired letter and the name of the procedure that will create the letter (in this case, TEST_ORDLETTER). If this is the first time that this procedure has been listed, there will be a pause of a few minutes before one can continue.

In the son form, 'free text', one obviously enters the form letter. I wondered at first how one selects the fields that appear in the letter: look carefully and one will see that there is a new icon that has been added to the icons in the text form. To the right of the picture of a diskette now appears an icon that looks like a rubbish bin (that can't be right!); pressing on it brings up a list of fields that were defined in the TEST_ORDERSREP report. This works in the same way that one can add a field to be displayed in a business rule.

I have designed my procedure to be called as a direct activation from the ORDERS screen but it doesn't have to be. However the procedure is invoked, a list of possible formats is displayed from which one must choose (in the same way that one chooses a format for an HTML document) - the names of these formats are the names of the desired letters that one entered in the letter generator.

Now one can read the documentation in the SDK and make sense of it.

Thursday 26 November 2020

An important definition in 'HTML documents'

I had occasion today to program an 'HTML document' that enables me to document (no pun intended) an interface that reads an external file. A standard report exists that lists the fields defined in the interface but there is no means of extracting the SQL part of the interface, which is the more interesting part that requires documenting. Hence the HTML document.

As usual, I was faced with the problem of how to remove the frame around the document's header. I can never remember how to do this so I am documenting it here. In the report that contains the header, one goes to the 'HTML definitions' son form and looks for the field '# Page Columns'; this will probably be the first field in the third tab. This should be set to 1.


Another field that is useful in this son form is 'Show Title?' - this should be set to D when one wants the report's title to be displayed and left empty when one doesn't want the title. Normally reports that display a table should have their title displayed, whereas reports that show comments or similar should not have a title.

Tuesday 10 November 2020

Don't forget to unlink GENERALLOAD in form triggers

I wrote a post-update trigger for a form yesterday; the code seemed correct when I looked at it, but it didn't work when added to the form. The code itself took some data from the form, inserted them into the GENERALLOAD table then called an interface in order to do something in another form. I extracted the code and rearranged it slightly to become a self-standing procedure so that I could debug it; the procedure worked properly. So why didn't the trigger work? 

Half of the answer came to me whilst walking the dog at 5:30 am (always a productive time for mental debugging): check the other triggers for this form to see whether they linked the GENERALLOAD table and, more importantly, whether they unlinked the table. And of course, I found another post-update trigger that I had written that linked GENERALLOAD but did not unlink it.

In the context of the form, when the second trigger came to link GENERALLOAD, the link failed because the table was already linked. Had I been clever enough to display a warning message if the link failed, instead of skipping to the end of the trigger, then I would have known immediately what the problem was.

Moral of the story: ALWAYS UNLINK GENERALLOAD at the end of a trigger!!

This isn't as clear cut as one would like to think. Normally one writes code such as

EXECUTE INTERFACE 'STATUSMAILQ', SQL.TMPFILE, '-L', :$.G2;
SELECT MESSAGE INTO :PAR1 FROM ERRMSGS
WHERE USER = SQL.USER AND TYPE = 'i';
ERRMSG 99 WHERE :RETVAL > 0;

The question is: does ERRMSG cause the linked table (:$.G2 in this case) to be unlinked? I suspect not, so alternative methods have to be found in a trigger - probably using WRNMSG, then unlinking the table, then jumping to the end of the procedure. Or unlinking directly after the 'execute' line and before the 'select' line; this way, RETVAL will hold the result of the SELECT statement.
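The underlying discipline is the classic acquire/release problem. Here is a Python sketch of the idea, where link_table and unlink_table are invented stand-ins for LINK/UNLINK, and try/finally plays the role of 'unlink on every exit path':

```python
# Illustrative sketch of the "always release, even on error" discipline that
# LINK/UNLINK needs; link_table/unlink_table are hypothetical stand-ins.
linked = set()

def link_table(name):
    if name in linked:
        raise RuntimeError(f"{name} is already linked")   # what bit the trigger
    linked.add(name)

def unlink_table(name):
    linked.discard(name)

def trigger_body():
    link_table("GENERALLOAD")
    try:
        pass  # ... fill GENERALLOAD, run the interface, check for errors ...
    finally:
        unlink_table("GENERALLOAD")  # runs on every exit path

trigger_body()
trigger_body()  # a second trigger can now link the table again
print(linked)   # → set()
```

Priority SQL has no try/finally, which is exactly why the unlink has to be written explicitly on every path out of the trigger.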

Tuesday 13 October 2020

The third way

I have written before about requirements that seem to be mutually incompatible. I faced another problem like that yesterday and I want to share its solution. I call this kind of solution "looking for the third way"; it requires us to look past the blinkers over our eyes.

The factory floor has a report that prints out a list of work orders, along with a barcode that encodes the work order, activity and quantity. The report works well as such but has problems in defining which work orders should be printed: the upholstery department receives a list of customer orders and has to obtain the work orders connected to these. The department was informed verbally of the required orders; seeing as the order numbers were derived from a report, I suggested that the department be sent this report - as Excel (ugh) - and then paste the order numbers into the orders form in order to get the appropriate work orders (the procedure takes care of this).

So far so good. Unfortunately, the report containing the order numbers is produced on a per-line basis and so the same order number can (and does) appear on several consecutive lines. The orders form cannot accept such a list of numbers, rejecting the duplicates on the basis that "key already exists". Here is the mutual incompatibility: on one hand, we have a list of customer orders that may include duplicates, whereas on the other hand we have a form that requires discrete order numbers.

It didn't help that the Excel file was created from the HTML report, making it useless, as opposed to creating an Excel file directly.

The solution came to me whilst walking the dog at 5:30 am this morning. The real problem is that the orders form cannot accept duplicates; the insight is to do away with the orders form, replacing it with a form that can accept duplicates. Such a form would be based on a table that is not linked to the orders table - I defined for it three fields (user, line and 'ordname') and built a form on this basis where the primary key is composed of the user and line number. The 'ordname' field has a post-field trigger that stores the user number and current line into the appropriate fields, thus the same order number can be entered several times - each line will have a different internal number.

I wrote a procedure that displays an initial parameters form, including a check box as to whether the user wants to input order numbers from Excel. Assuming that this check box is marked, my special form will then be displayed into which the user can paste the numbers from Excel. Upon closing the form, a piece of code inserts the distinct order numbers into a linked orders table. This is then used as the basis for querying the work orders.
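The accept-duplicates-then-insert-distinct idea can be sketched as follows (the order numbers and user number are invented; this is an illustration of the logic, not Priority code):

```python
# Sketch of the "accept duplicates, then insert distinct" idea: the staging
# list keys on (user, line), so repeated order numbers can coexist; the query
# step then works only from the distinct set.
pasted = ["SO20-1001", "SO20-1001", "SO20-1002", "SO20-1001", "SO20-1003"]

# Staging 'form': the primary key is (user, line number), so duplicates
# pasted from Excel do not collide.
user = 7
staging = [(user, line, ordname) for line, ordname in enumerate(pasted, start=1)]

# The closing code then inserts only the distinct order numbers.
distinct_orders = sorted({ordname for _, _, ordname in staging})
print(distinct_orders)  # → ['SO20-1001', 'SO20-1002', 'SO20-1003']
```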

Conclusion: try to see past the constraints of the system! Sometimes I tear my hair (what's left of it) when I hear some of the requests from the CEO, but he has an advantage over me in that he doesn't know what is seemingly impossible to do with the system. He is often right, and requires me to think out of the box, or see past the blinkers.

Friday 2 October 2020

Further work on the fault analyser (PrioXRef) - beware of linked files

I mentioned the other day a 270-line Priority procedure that I used as a test for the cross referencer. This procedure was at the proof of concept stage: it did what it was supposed to do but made several assumptions. I improved it to be more general - a conceptual change that required many changes throughout the procedure, which amongst other things builds a new purchase part with specific characteristics. At the end of the procedure, a purchase order is created that includes this new part. As such, the part needs to have a price in the supplier's price list, based on the base part of the new part - whilst creation of the purchase order was part of the POC procedure, addition of the part to the price list wasn't.

After adding the appropriate code for the price list, I saw that it wasn't working, leading to a painful debugging session before the penny dropped. In the query that retrieved the price of the base part, I was using the table PART and this was what was causing the problem - PART was linked and of course the actual part that I was using wasn't in the linked table. What made this error particularly galling was that I had already documented this problem.

As happens frequently these days, I woke up in the middle of the night. After dealing with my bladder, the first thought that popped into my head was that PrioXRef doesn't deal with 'UPDATE' and 'DELETE' statements; the second thought was how I could get the program to deal with the linked table problem. Unfortunately, this problem occupied my mind for quite some time and I had to take cognitive steps to banish it so that I could resume sleeping.

At the moment, I don't think that there is a way of showing this as a fault. How is PrioXRef supposed to know that the part that I am using as a parameter is not in the linked table? Of course, the problem would not exist if I had the internal part number corresponding to :SEARCHNAME, as then I wouldn't have to include the PART table in this query.

/* Get price from supplier's pricelist */
SELECT SPARTPRICE.PRICE, SUPPRICELIST.SUPPLDATE, SUPPRICELIST.SUPPLIST
INTO :PRICE, :SDATE, :SLIST
FROM SUPPRICELIST, SPARTPRICE, PART, SUPPLIERS
WHERE SUPPRICELIST.SUP = SUPPLIERS.SUP
AND SUPPRICELIST.SUPPLIST = SPARTPRICE.SUPPLIST
AND SPARTPRICE.PART = PART.PART
AND SUPPLIERS.SUPNAME = :SUPNAME
AND PART.PARTNAME = :SEARCHNAME
ORDER BY 2 DESC;
The only idea that I have at the moment is to display a warning like "[line number] warning: PART is linked". In fact, whilst PrioXRef knows that table PART is linked on line X and unlinked on line Y, it can't use that information as it ignores SELECT statements. That's not exactly true: SELECT is not a keyword that triggers any handling, but INTO and FROM are such keywords. I should add some code to the FROM handling that checks whether any of the following tables are stored in the binary tree of identifiers. The FROM code should remain in effect until the next keyword is found (there may not be a WHERE clause, ORDER BY or GROUP BY in the query). PrioXRef also has to check that the token following each table name is not an alias (the above problem was solved by aliasing PART, which causes the real table to be joined).

This might require a rewrite of the tokeniser, as at the moment it does not save the token terminator (this would be a comma in the above example), and so getting the alias may cause the next table name to be obscured. On the other hand, there is code that pushes the next token back into the current text line to be parsed if it's not the expected token. The alias detection code should be rewritten as a function so that it can be called at several different places.
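In outline, the FROM-clause check discussed above might look like this (an illustrative Python sketch, not PrioXRef's actual code; the alias handling is exactly the case that tripped up the PART query):

```python
# Sketch of the FROM-clause check: split the table list, note aliases, and
# warn when a listed table is currently linked. All names are illustrative.
linked_tables = {"PART"}

def from_clause_warnings(from_list):
    warnings = []
    for item in from_list.split(","):
        parts = item.split()
        table = parts[0]
        alias = parts[1] if len(parts) > 1 else None
        # An aliased table joins the real table, so only warn when unaliased.
        if table in linked_tables and alias is None:
            warnings.append(f"warning: {table} is linked")
    return warnings

print(from_clause_warnings("SUPPRICELIST, SPARTPRICE, PART, SUPPLIERS"))
# → ['warning: PART is linked']
print(from_clause_warnings("SUPPRICELIST, PART ORIG"))
# → []
```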

A few people have received PrioXRef, but I have yet to receive any comment bar 'Thanks' and 'Nice idea'.


Wednesday 30 September 2020

Using the cross referencer in the wild

So far, I've been using the cross referencer to check programs that have been debugged. Even so, I am finding problems here and there, primarily forgetting to unlink tables.

This morning I wrote a new procedure that updates bills of materials; the details aren't particularly important. Before running it "in the wild", I ran the code through the cross referencer and received the following analysis (to my chagrin): ':ANAME [Variable]: used before initialised; used only once'. This is exactly the sort of problem that the program is supposed to find. I then looked in the references section for this variable and discovered that its sole reference is on line 44. I then looked in the code window and found this snippet

37: :FANAME = '';
38: SELECT ACT.ACTNAME INTO :FANAME
39: FROM PARTARC, ACT
40: WHERE PARTARC.ACT = ACT.ACT
41: AND PARTARC.PART = :FMP
42: AND PARTARC.SON = :SMP;
43: LOOP 100 WHERE :ORIGNAME = :ANAME; /* no change, so skip */
Apart from the fact that :ANAME is referenced in line 43 and not 44, the error is obvious: the code finds a value and saves it in :FANAME, but the rest of the code blithely uses :ANAME. ERROR!!

Actually, in this program that's not too bad an error and will cause lines in the bill of materials to be updated unnecessarily, but that's not the point.


Saturday 26 September 2020

Introducing the Priority procedure cross referencer and fault analyser

Over the months, I have collected various mistakes that I have made whilst programming procedures in Priority and that were not noted by the in-built syntax checker. I want to write a program that will check things that are not caught by the internal program, to supplement it and not replace it. Simply put, I want to write an external syntax checker that will check things like matched LINK/UNLINK pairs, uninitialised variables and a few other problems. I've been devoting a fair amount of thought as to how to store the data of such a program; traditional cross reference programs in Pascal used linked lists, and indeed I found such a program yesterday evening. But I would like to have a much more modern interface and use types such as stringlists and similar. A stringlist is ideal for storing what would have been an array of identifiers but isn't so useful when additional data regarding those identifiers is required.

I will no doubt continue to debate the subject in my mind until I commence coding; at the moment, my inclination is to take the old school cross referencer and adapt it to the Priority SQL syntax. This will be a complex task that would have to be done one way or another, so it's probably better to start with something that works so that I can concentrate on the syntax and not on how everything is stored. A cross referencer is a good idea anyway: it makes finding references to a variable much easier. I started writing a long program in Priority yesterday afternoon and finished writing and debugging this morning: this is 270 lines long which is fairly long but not too complicated. During the writing process, I moved pieces from place to place within the program (primarily moving non-variant operations out of loops, to be technical) and sometimes these edits slightly mangled the text. I discovered a new bug: a variable will always start with a colon (e.g. :DAYS); in the course of one of these edits and pastes, I had a variable named ::DAYS which is not the same as :DAYS. 

A cross referencer helps in finding variables that appear only once; this can mean that either the variable is superfluous as it is never used, or more problematic, it is a variable without value (as in the above case of ::DAYS). I wasted an hour yesterday on another procedure, trying to figure out why a value being saved in a variable was not being written later on. Eventually I saw the problem: the value was saved in :PARTCOST but was later accessed as :PARCOST. A cross referencer would find this immediately.

After spending more than a few hours over the weekend working on the cross-referencer, I have completed the first version.

As in the army, everything divides into three. For this program, the first stage is parsing the input file, then displaying the references and finally displaying the analysis. The first stage can also be split into three: the tokeniser, the lexical analysis and the storage. A token is a string extracted from a text file; for example, if the current line is 'select part, partname from part', then there are five tokens: 'select', 'part', 'partname', 'from' and again 'part'. In programming languages with regular syntax, the tokeniser is normally quite straight-forward, but it turns out that the procedural SQL language of Priority does not have regular syntax and cannot be considered to be context free.

Two examples of the ad hoc syntax: I want to note when a variable is initialised and when it is not. Initialisation can occur in one of two forms: either there is an equals sign after the token (e.g. :SEARCHNAME = '12345') or the keyword INTO precedes the token (e.g. SELECT DAY INTO :DAYS). These two opposite options (one prefix and one postfix, to use the technical terms) make it complicated to program. Another syntactic problem is the colon - :. Normally this serves to mark variables, e.g. :DAYS, but it can also be used to separate between two clauses in a ternary comparison (e.g. :DAYS < 7 ? 3 : 5). 
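The colon ambiguity can be resolved with one character of lookahead: a colon followed by a letter starts a variable, anything else is the ternary separator. A toy Python tokeniser illustrating just this rule (not PrioXRef's actual tokeniser, which is written in Pascal):

```python
import re

# ':' followed by a letter starts a variable token; a bare ':' is the
# ternary separator. The alternation order in the regex encodes exactly
# this one-character lookahead.
def tokenise(line):
    pattern = r":[A-Za-z][A-Za-z0-9_.$]*|[A-Za-z][A-Za-z0-9_.]*|\d+|[<>=?:,;()]"
    return [m.group() for m in re.finditer(pattern, line)]

print(tokenise(":DAYS < 7 ? 3 : 5"))
# → [':DAYS', '<', '7', '?', '3', ':', '5']
```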

The correct tokenisation of table aliases (e.g. GENERALLOAD F1) took quite a bit of time.

Storage of the identifiers and their references is by means of a binary tree; this part was based on the cross referencer that I found a few days ago which was written in standard Pascal. The references are stored in a queue for each node. I added a few fields to these variable types in order to store further information: the type of identifier (variable, cursor, table) and the operation in progress at the reference (e.g. variable initialisation, opening a cursor, linking a table). This part was simple. Displaying the references was also fairly straight-forward.
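A toy version of that storage scheme in Python, with invented field names: a binary search tree keyed on the identifier, each node carrying its kind and a queue of (line, operation) references:

```python
# Binary search tree of identifiers; each node holds a queue (here a list)
# of references, each tagged with the operation in progress.
class Node:
    def __init__(self, name, kind):
        self.name, self.kind = name, kind       # e.g. 'DAYS', 'variable'
        self.refs = []                          # queue of (line, operation)
        self.left = self.right = None

def insert(root, name, kind, line, op):
    if root is None:
        root = Node(name, kind)
    if name < root.name:
        root.left = insert(root.left, name, kind, line, op)
    elif name > root.name:
        root.right = insert(root.right, name, kind, line, op)
    else:
        root.refs.append((line, op))            # same identifier: queue the ref
    return root

root = None
root = insert(root, "DAYS", "variable", 3, "initialise")
root = insert(root, "GENERALLOAD", "table", 5, "link")
root = insert(root, "DAYS", "variable", 9, "read")
print(root.refs)  # → [(3, 'initialise'), (9, 'read')]
```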

The analysis part is dependent on the type of identifier: there are certain checks for variables, certain checks for cursors and certain checks for tables. I found a method to make these checks as stream-lined as possible.

I tested the program by running it alternately on a short test file into which I at times inserted deliberate errors (so that I could check that the errors were being picked up) and on the file for the procedure that I wrote a few days ago. Every time I would look at the references, noting mistakes that had to be fixed. Now I'm 99% confident that I've correctly parsed the files and have correctly denoted variable initialisation (this was very complicated). Running the finished program on my procedure finds three variables that were initialised and never used; these can be safely deleted from the procedure.

My next step is to publicise the program within a small community, inviting examples of procedures whose analysis appears to be wrong. Maybe there are other checks that need to be added.

Thursday 10 September 2020

Smaller barcodes (not a pitfall but a tip)

There is an increasing tendency in my environment to print barcodes in reports, especially for work orders, so that these can be scanned into machines or back into Priority.


The size of the barcode is proportional to the contents of the barcode, so it can often be that the barcode is long (the contents of the above are 200416121400001, which apparently is a program name for a CNC machine). So long in fact, that unless one takes steps, the barcode is liable to be split over two lines, which is definitely wrong! The way to overcome this in a report is to go into the 'HTML design' sub-form and enter the value W into the field 'position in cell'. I don't know why W was chosen - maybe it's short for 'Wide'.

Even so, the barcode can still be very large and here is a method that can shrink it. In order to display a barcode, one chooses the font 'Free3of9' and supposedly one has little control over the size. In order to fix this, one opens the following option: System Management > System Maintenance > Advanced Design > Font Definitions. The screen looks like this


The standard font 'Free3of9' is the selected line; in the 'size' column one can see that its size is 8, whatever that is (big!). What I did was to create a new line, call the font 'Free3of9,6' using the same font, and in the 'size' column I entered 6. This is the second line in the screen. A barcode whose font is this new font will be 75% the size of the original. I also defined a font of size 4 but that seems too small!

Last remark about barcodes: one has to place an asterisk before and after the field to be displayed! So the barcode at the beginning was defined as STRCAT ('*', PARTSPEC.SPEC2, '*').

Thursday 27 August 2020

Beware of the dollar sign (continued)

I have found a much simpler method of obtaining a naked dollar sign to be inserted within a string, using ENTMESSAGE. One defines a message number whose text is simply $. Here is the code which (to me, at least) is self explanatory.
:PAR1 = 'TEST@TEST.COM';
SELECT ENTMESSAGE ('$', 'P', 10) INTO :PAR2 FROM DUMMY; /* $ */
:PAR3 = STRCAT (STRPIECE (:PAR1, '@', 1, 1), :PAR2, STRPIECE (:PAR1, '@', 2, 1));
WRNMSG 99;
As a result of these statements, PAR1 will be TEST@TEST.COM, PAR2 will be $, and PAR3 will be TEST$TEST.COM. Using ENTMESSAGE is a better method as it relies only on itself - it doesn't rely on a specific value in the CURRENCIES table (which might not exist) nor does it rely on an entry in a special constants table.

Note that the ENTMESSAGE statement uses '$' to denote 'the current procedure'.

Monday 24 August 2020

Beware of the dollar sign!

At a company for which I am doing piecework programming, I was asked to fix a procedure that was giving an error message about being unable to create a file. I looked at the procedure (one which prints invoices); the problematic code was trying to create a string whose contents - basically the invoice number and customer email - would be displayed as a barcode in the final document.

Why is this problematic? For those who don't know, the character set that can be displayed in a barcode is severely limited to upper case characters, digits and a few characters such as * and $, but not @. Email addresses have the '@' character, so the original programmer had to find a way of replacing this with the '$' character.

In order to achieve this, the original programmer used a baroque set up of creating the initial string (with '@'), writing it to a file, then running the FILTER program three times (!) in order to get the file contents (i.e. the string) into the required format; one run replaced @ with $, one run turned the entire string into upper case; I don't know (nor care) what the third run did. This file was then loaded via an interface into a simple table from which the string was extracted and eventually displayed. Breathtaking in its ingenuity but totally misguided. To be fair, the procedure might have been written for a much earlier version of Priority in which certain functionality might be missing.

The actual problem that the client faced was that the intermediate files were being written to the root of disk C: which is of course a no-no. I simply changed the file directory and the procedure started working. But the sheer complication of this procedure irked me and I was sure that I could find a simpler way of creating the required string.

Here is my first attempt:
SELECT TOUPPER (CUSTOMERSA.EMAIL) INTO :EMAIL
FROM CUSTOMERSA, INVOICES
WHERE CUSTOMERSA.CUST = INVOICES.CUST AND INVOICES.IV = :$.IV;
SELECT STRCAT ('*E', IVNUM, STRPIECE (:EMAIL, '@', 1, 1), '$',
STRPIECE (:EMAIL, '@', 2, 1), '*') INTO :$.BC
FROM INVOICES WHERE IV = :$.IV;
If the invoice number were IV200001 and the email test@microsoft.com, the resulting string would be expected to be '*EIV200001TEST$MICROSOFT.COM*'. Unfortunately this is not the result: using the naked dollar sign ('$') causes the preprocessor in the parser to replace it with the name of the procedure, resulting in something like '*EIV200001TESTTEST_WWWSHOWCIV2MICROSOFT.COM*'! This is normally a good thing, as it enables one to pass the current procedure name to the ENTMESSAGE function as $ without spelling out the name - as a result, the same code can be copied and pasted between procedures without problem.
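For comparison, the intended transformation is trivial in a language without this preprocessor behaviour; a Python sketch with the same example values:

```python
# The intended transformation: Code 39 allows $ but not @, so @ becomes $,
# the whole string is upper-cased, and it is wrapped in asterisks.
# Invoice number and email are the example values from the text.
def barcode_payload(ivnum, email):
    local, _, domain = email.partition("@")
    return f"*E{ivnum}{local.upper()}${domain.upper()}*"

print(barcode_payload("IV200001", "test@microsoft.com"))
# → *EIV200001TEST$MICROSOFT.COM*
```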

A method of obtaining the naked dollar sign without using the naked dollar sign is required! After some lateral thinking, I came up with the following, but unfortunately it too inserts the first letter of the procedure's name instead of the dollar sign.
SELECT TOUPPER (CUSTOMERSA.EMAIL) INTO :EMAIL
FROM CUSTOMERSA, INVOICES
WHERE CUSTOMERSA.CUST = INVOICES.CUST AND INVOICES.IV = :$.IV;
:DOLLAR = '1$1';
SELECT STRCAT ('*E', IVNUM, STRPIECE (:EMAIL, '@', 1, 1), STRIND (:DOLLAR, 2, 1),
STRPIECE (:EMAIL, '@', 2, 1), '*') INTO :$.BC
FROM INVOICES WHERE IV = :$.IV;
Incidentally, I don't see the above as being programming in Priority; it's got nothing to do with working with a database engine. Instead, it's more general programming in the context of the Priority programming language, which isn't something that can be easily taught. One has to remember that in a procedure, '$' is going to be expanded into the procedure's name.

I finally figured out how to solve the problem - use the code of currency -2, which is ... $ (at least, in companies where the default currency is NIS; for companies where the default currency is dollars, the number is -1). The below works in a test procedure on my server, but I will have to implement it fully on the client's server in order to be sure.
SELECT TOUPPER (CUSTOMERSA.EMAIL) INTO :EMAIL
FROM CUSTOMERSA, INVOICES
WHERE CUSTOMERSA.CUST = INVOICES.CUST AND INVOICES.IV = :$.IV;
SELECT STRCAT ('*E', INVOICES.IVNUM, STRPIECE (:EMAIL, '@', 1, 1), CURRENCIES.CODE,
STRPIECE (:EMAIL, '@', 2, 1), '*') INTO :$.BC
FROM INVOICES, CURRENCIES
WHERE INVOICES.IV = :$.IV AND CURRENCIES.CURRENCY = -2;
The above code did not work on the client's server as they have defined the code of currency -2 to be USD. Tired of knocking my head against a brick wall, I swiftly defined a personal table of constants, defined a constant with the value '$' then inserted the appropriate code into the procedure. This finally works!!

Tuesday 4 August 2020

Don't be a miser with brackets

In reports, there are three very useful 'group functions': T, S and B, that can be used on columns containing numerical data. T causes the total of the column to be presented at the end of the report, S causes sub-totals to be presented and B causes both sub-totals and totals to be presented. Unless one is displaying oranges and apples in the same column, B is normally the preferred option.

One can also use these functions on calculated columns; for example, I have been working on a report which displays in one column expected costs, in another column the actual costs and in the third column the ratio of actual to expected costs. 

Let's say that the report looks like this

Expense      expected  actual  % ratio
Gas          160       170     106.25
Electricity  188       135     71.81
Water        207       197     95.17
Totals       555       502     90.45

In order to achieve this, the definition of '% ratio' is (in words) 100.0 times the actual cost divided by the expected cost. Assuming that 'expected' has a column number #100 and 'actual' has a column number #110, the definition of column 120 will be 100.0 * #110 / #100. The first two columns should have the group function B (or T) in order to display their total at the end, and the ratio should have the group function b (or t). 

Sometimes this works, sometimes it doesn't.

This simple version might well have worked, but I had to change it in order to prevent division by zero; let's say that there are actual costs that were not budgeted. In such cases, Priority displays 100 * #110, which is completely wrong! There should be a condition that if expected = 0, then the result is zero, else it's what the formula calculated. Or in Priority-speak: #100 = 0 ? 0 : 100.0 * #110 / #100.

This formula certainly handles division by zero properly and also displays the correct ratio in each row, but now the total percentage ratio is no longer 90.45 but 273.23 - the sum of the percentages in each row, and not the percentage of the total!

I know that group function b works because I have a report that displays the total percentage correctly. So why doesn't it work here? After a great deal of unsuccessful magic incantations, I was forced to call in someone else. She too did not know the reason off-hand, but she compared the formula here to the successful formula in another report, and noticed that here I was using the ternary comparison expression (taken from C; read it as 'if #100 is zero then return 0, else return 100.0 * #110 / #100').

It turns out that the parser in Priority is very temperamental when it comes to the order of evaluation of expressions, especially the ternary expression; the parser requires a great deal of help, which one supplies by means of brackets. When these are applied liberally, the 'b' group function is finally honoured in the totals line: (#100 = 0 ? 0 : (100.0 * (#110 / #100))).

The use of brackets is somewhat inconsistent in Priority and reminds me of the UNLINK command: one can often get away without using it, but sometimes it is essential, and so one should always use it. The same is apparently true of brackets: don't be a miser with them!

Also, it seems that one has to use the column numbers (#100, etc.) instead of the actual field names (e.g. STACK4.REALDATA).

Saturday 25 July 2020

Don't mess with standard interfaces

A week ago, I wrote a procedure which iterates over a set of work orders and sets their status to 'cancelled'. In order to strengthen the cancellation, I decided to disconnect the work order from the connected customer order line by writing zero in the appropriate field. As the interface for updating the status of a work order only contains the work order number and status, I decided to add the customer order line, and because I need to write zero into this field, I marked the flag 'accepts null values'. I checked my procedure and it worked fine.

A few days later, someone drew my attention to the fact that after printing a work order (which changes its status from 'approved' to 'printed'), the connection to its customer order line was lost. For you, the reader, the reason should be obvious, but for me it wasn't. I checked the work orders form which - unusually - was free of any customisation which I might have done. There are rules defined for the screen, but they are only for me; one says that the status should be changed to 'cancelled' if the order line is zero - this is not what is happening and anyway, the problem was happening to other users.

Very problematic. I tried rebuilding the screen but this had no effect. As it happens, I had added to a daily report data about work orders which was based on the connected customer order line. There was data for the first two weeks in July, but the data mysteriously disappeared this week. From this I was able to conclude that the problem had occurred only during the past few days. I didn't remember programming anything to do with the work orders form, and anyway I had already checked that there were no customisations.

Then inspiration struck: there is a module which I rarely use that prepares updates: one can choose a selection of changes that have been made in order to package them and transfer them to another installation. I opened this module and requested all the changes that had been made in the past week - all kinds of changes appeared. But one stood out: the change to the standard interface for updating the status of work orders.

Then the penny dropped: the procedure that prints a work order also calls a procedure called DocStat which updates the status of the work order to 'printed', presumably by means of the standard interface which I had modified. This procedure doesn't know about the field that I added and so the value passed would be 0 (null). And because I defined that this field can accept null values, this value was indeed passed ... and disconnected the work order from the customer order line. Ooooops! Red face time (again).

Immediately I removed my field from the interface whereupon printing ceased to disconnect. I then altered my original procedure to update the work orders table directly (nu nu nu) in order to disconnect.

Moral of this story: don't mess with standard interfaces!

Saturday 18 July 2020

More about transferring data from screen triggers to procedures

I wrote two months ago about calling a procedure from a screen trigger, showing this code
LINK CUSTOMERS TO :FILE; INSERT INTO CUSTOMERS SELECT * FROM CUSTOMERS ORIG WHERE CUSTOMER = :$$.NSCUST; EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYCUSTOMER', 'CUSTOMERS', :FILE; UNLINK AND REMOVE CUSTOMERS;
I didn't write about the procedure that this code calls, namely TEST_COPYCUSTOMER. I want the procedure to be callable both from a menu, where the user enters a list of customers, and from this background call, where there is no user interface. In order to accomplish this, the procedure has two stages: the first is an INPUT stage, which has one parameter named PAR which is linked to the customers table, and the second is an SQLI stage that does the actual work.

An improvement that appears in later Priority versions (possibly from 17, definitely from 18) is that there is a new file type, NFILE. In previous versions, if someone entered * (i.e. all records), then all the records in the given table would be copied into the linked table. This can take time, so the new file type, NFILE, was created: the documentation says "Select NFILE if the linked file comprises a group of records and you want the link table to remain empty when the user enters * or leaves the field empty".

What the documentation doesn't show is when and how to use this new file type. I use it for parts, customers, suppliers and sometimes customer orders and inventory documents, when there is a strong possibility that the user will leave the parameter empty, meaning 'choose all records'. There's no point in using an NFILE if the user is always going to choose a list of values.

How is the NFILE used in practice?
LINK CUSTOMERS TO :FILE; /* NFILE */ ERRMSG 1 WHERE :RETVAL <= 0; GOTO 1 FROM CUSTOMERS WHERE CUST > 0; UNLINK CUSTOMERS; LABEL 1;
If the user left the parameter empty (or entered *), then there will be no records in CUSTOMERS, so the statement GOTO 1 FROM CUSTOMERS WHERE CUST > 0 will not branch; the UNLINK CUSTOMERS statement then executes, meaning that future references to this table will address the original table. But should the user enter even one customer number, the branch to label 1 will be taken and future references will address the linked table.
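Putting the pieces together, here is a hedged sketch of an SQLI stage built around an NFILE parameter (the cursor name, label numbers and the per-customer 'work' are my own placeholders, not taken from the actual procedure):

```sql
LINK CUSTOMERS TO :$.PAR; /* PAR is declared with file type NFILE */
ERRMSG 1 WHERE :RETVAL <= 0;
GOTO 1 FROM CUSTOMERS WHERE CUST > 0;
UNLINK CUSTOMERS; /* empty link: revert to the original table */
LABEL 1;
/* CUSTOMERS now means either the linked subset or the full table */
:COUNT = 0;
DECLARE NFCUR CURSOR FOR
 SELECT CUST FROM CUSTOMERS WHERE CUST > 0;
OPEN NFCUR;
GOTO 200 WHERE :RETVAL <= 0;
LABEL 100;
FETCH NFCUR INTO :CST;
GOTO 200 WHERE :RETVAL <= 0;
:COUNT = :COUNT + 1; /* stand-in for real per-customer processing */
LOOP 100;
LABEL 200;
CLOSE NFCUR;
```

The same loop thus serves both invocations: from the menu it iterates over the user's chosen customers, and from a direct call with an empty link it iterates over the whole table.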

Now back to the procedure TEST_COPYCUSTOMER: for reasons which at the moment escape me, the PAR parameter being passed from the screen trigger to the procedure was empty. As CUSTOMERS is defined as an NFILE and is empty, the linked table is unlinked and so the procedure works on ALL the records in the CUSTOMERS table - not what I wanted! In order to prevent this, I defined CUSTOMERS as a regular FILE, accepting that there would be a performance hit when the procedure is called directly from a menu.

But why is the PAR parameter empty? I don't know (as yet), but when I removed the BACKGROUND part of the statement EXECUTE BACKGROUND ACTIVATE, the code worked properly. It appears that there is a difference between installations, as this code worked properly at one site but did not at another site.

Another point to notice is the use of ACTIVATE: normally I would use the external program WINACTIV. The documentation states that "The difference between the WINACTIV command and the ACTIVATE and ACTIVATF commands is that WINACTIV has a user interface, meaning that you can define a progress bar and/or messages that require a response from the user (using a PRINTF, PRINTCONTF or CHOOSEF command) and these will be visible to users while the procedure is running, whereas the ACTIVATE and ACTIVATF commands will not display these elements. As such, the WINACTIV command cannot be used when working with the Priority web interface." The client where the code did not work properly uses the web interface as well as the traditional interface, so I thought it prudent to use ACTIVATE instead of WINACTIV.

To conclude, my original code used two optimisations, namely BACKGROUND and NFILE. These optimisations may not work on some systems.

[Edit from 28/02/23] I faced a similar situation today: a procedure is executed automatically every few minutes, and if it finds a candidate order, that order is passed to another procedure that does something complicated to the order. The calling procedure was as follows
SELECT SQL.TMPFILE INTO :NEW FROM DUMMY; LINK ORDERS TO :NEW; GOTO 99 WHERE :RETVAL <= 0; INSERT INTO ORDERS SELECT * FROM ORDERS ORIG WHERE ORD = :ORD; EXECUTE BACKGROUND WINACTIV '-P', 'TEST_PROC', 'ORDERS', :NEW; UNLINK AND REMOVE ORDERS;
Although the test procedure was activated, it never received an order number. Eventually the solution dawned on me: the temporary file was being deleted before the test procedure could get its hands on it. Solution: remove 'background' and wait for the called procedure to complete before deleting the file.
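The fix can be sketched as follows (using the same hypothetical procedure name as in the snippet above; the only change of substance is dropping BACKGROUND so that execution waits):

```sql
SELECT SQL.TMPFILE INTO :NEW FROM DUMMY;
LINK ORDERS TO :NEW;
GOTO 99 WHERE :RETVAL <= 0;
INSERT INTO ORDERS SELECT * FROM ORDERS ORIG WHERE ORD = :ORD;
/* No BACKGROUND: control returns only after TEST_PROC has finished */
EXECUTE WINACTIV '-P', 'TEST_PROC', 'ORDERS', :NEW;
UNLINK AND REMOVE ORDERS; /* now safe to delete the temporary file */
LABEL 99;
```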

Monday 13 July 2020

Order of tuples can matter in a form interface

One can suggest that there are several kinds of form interface: those which update one form, those that update two forms and so on. Of those that update two forms (father and son forms), there can be multiple lines to be updated in the son form. Apart from the obvious limitation that the information for a father form has to come before its sons, the order of lines within the son form is normally irrelevant.

Yesterday I came across a case where the order of lines within the son form is vitally important, which is what I want to write about today. Specifically I am writing about reporting completed work orders: the father form is "Reporting Production" and the son form is "Production Details". We have begun working with several actions within a work order: to prepare the upholstery for a chair, the cloth has to be cut, the seat has to be made by glueing together wood and plastic, and then the cloth has to be stretched over the seat and sewn together. We are trying to introduce a system whereby each worker reports each action by means of a personal wireless barcode scanner.

Unfortunately, what management wants is not what is happening, so it frequently happens that someone will report only the final stage in the process. Doing so closes the work order but leaves the inventory in an unstable state, so it is desired that reporting the final stage will also report any previous unreported stages. I wrote code which is activated after a barcode is read: first it enters the number of the current production report into a load table (I open one report per day for the department), then a line is inserted for the action which is encoded in the barcode. Following this, a check is made to see whether this is the last stage in the work order; if so, any previous unreported stages are added to the load table, which can look like this:

Line   Record type   Payload
1      1             Report number
2      2             Action as encoded in the bar code
3      2             Prior unreported stage in work order
4      2             Prior unreported stage in work order

The first two lines were doing their job but the final, additional, lines did not cause their stages to be reported. I looked at this for some time before I had an inkling of what might be the problem: reporting the final stage of a work order causes the work order itself to close automatically. Then, when the extra lines come along, the work order is already closed and so will reject any extra lines. My solution was to number the barcode line as 99, so the load table looks like this prior to its being inserted:

Line   Record type   Payload
1      1             Report number
99     2             Action as encoded in the bar code
2      2             Prior unreported stage in work order
3      2             Prior unreported stage in work order

The lines are handled according to their line number, and so line 2 is handled before line 99 even though it was physically inserted after line 99. This indeed solves the problem.
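The load described above can be sketched like this (the interface name, variables and record types are assumptions based on my own conventions, not the actual code):

```sql
LINK GENERALLOAD TO :$.GEN;
ERRMSG 1 WHERE :RETVAL <= 0;
/* Record type '1': the production report (father form) */
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT)
VALUES (1, '1', :REPNUM);
/* The scanned (final) action gets a high line number so that it is
   handled only after the prior stages have been reported */
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT)
VALUES (99, '2', :SCANNEDACT);
/* Prior unreported stages, handled first by virtue of their line numbers */
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT)
VALUES (2, '2', :PRIORACT1);
INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT)
VALUES (3, '2', :PRIORACT2);
EXECUTE INTERFACE 'TEST_REPPRODUCTION', SQL.TMPFILE, '-L', :$.GEN;
UNLINK AND REMOVE GENERALLOAD;
```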

Lesson of the story: sometimes the line number within a load table for a form interface is very important.

Thursday 2 July 2020

Field names/titles in tables

There is a standard table, STACK11, which I use frequently, as it has three key fields. It also has several real fields, but unfortunately has only two integer fields. So I added to this table two new fields, TEST_INTDATA3 and TEST_INTDATA4 and called them INTDATA3 and INTDATA4 respectively.

At some stage, I discovered that I was having problems with this table. The following code should fail the syntax checker but passes:
LINK STACK11 TO :$.S11; ERRMSG 1 WHERE :RETVAL <= 0; INSERT INTO STACK11 (KEY1, KEY2, KEY3, INTDATA1, INTDATA2, INTDATA3) VALUES (1, 2, 3, 4, 5, 6); UNLINK STACK11;
Not only does it pass the syntax check, but when one selects the fields from the table, TEST_INTDATA3 has the value 6! Putting this simply, Priority has a bug in the syntax checker for procedures which allows it to get confused if the title of a user-added field is the same as the name of an existing field. The moral of the story is to give the field a title that does not clash with any field name, such as TEST_INTDATA3 (i.e. the title is the same as the field name).

Incidentally, any attempt to add STACK11.INTDATA3 to a report fails; this is because the name of each field is checked against the data dictionary when it is added to a report, and any name which is not in the dictionary is immediately rejected. This is not a syntax check.

Sunday 28 June 2020

Beware of subroutines in form triggers

I wrote a complicated form trigger - post-update - which is supposed to extract lines from a customer order and insert them into another order. Despite the fact that the code looked correct (and of course, there were no syntax errors), it wasn't working. Every query was checked in WINDBI and was found to be correct. Eventually I was reduced to inserting WRNMSG statements at every decision spot in order to find out why the code wasn't working as expected.

Eventually I found what the problem was: the trigger was calling a sub-procedure defined at the end of its code that returns the number of non-closed lines in the order. I used a sub-procedure because the code is called at two separate places in the trigger, and the number of non-closed lines can vary from the first invocation to the second if lines are extracted to another order and closed in the process.

The sub-procedure simply wasn't being called. Why not, I asked myself? In semi-desperation, I changed the sub-procedure's number from SUB 900 to SUB 1234 - and then the code worked! This means that some other trigger must define a sub-routine 900 and this is what is being called (if anyone is interested, SUB 900 is defined in the trigger ORDERS/UpdateDiary). 

So what can be concluded from this exercise? All the triggers of a form must be considered to be part of the same name space, which means that each sub-routine should have a separate number. No more SUB 900 - use more imaginative numbers! This is a much more insidious problem than the #INCLUDE problem. In the same way that I now use non-obvious names for cursors in triggers (and let us not forget that the name of a cursor in a trigger must either begin or end with @), one must also use non-obvious numbers for sub-routines.
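A minimal sketch of the pattern (the field names, message numbers and the subroutine number are illustrative assumptions, not taken from the actual trigger):

```sql
/* Count non-closed order lines; use an unlikely subroutine number
   so that no other trigger on the form can collide with it */
:OPENLINES = 0;
GOSUB 7391;
WRNMSG 500 WHERE :OPENLINES = 0; /* first use */
/* main trigger code extracts lines to another order here */
GOSUB 7391;
WRNMSG 501 WHERE :OPENLINES = 0; /* second use - the count may have changed */
GOTO 999;

SUB 7391;
:OPENLINES = 0;
SELECT COUNT(*) INTO :OPENLINES
FROM ORDERITEMS
WHERE ORD = :$.ORD AND CLOSED <> 'Y'; /* 'closed' flag assumed */
RETURN;

LABEL 999;
```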

Friday 29 May 2020

Calling a procedure from a screen trigger

I am starting work as a sub-contractor (i.e. someone else is the consultant who interfaces with the client and defines their work procedures whereas I'm the programmer who implements those procedures in Priority) for a company that wants to have one master company in which are defined the customers, suppliers, parts, etc. and satellite companies in which the actual transactions take place. My primary task is to write interfaces that will automatically transfer data from the master company to the satellites. 

For some tables (e.g. supplier type), new/updated data will be copied to all the satellites, whereas for other tables (e.g. customers), the user can decide to which satellites the new/updated customer will be copied. Upon hearing this, I realised that I had to define a table with four fields: the id of the new datum, the type of the new datum, the satellite name and whether the data should be copied to this satellite. I also developed a form based on this table; this form will be a 'son' form to all the forms that allow a choice of satellite. This form is easy to define as it takes its inspiration from another general form, EXTFILES. The fun starts in the copying procedure.

Within Priority, there exists a documented method for sending data from one procedure or trigger to another procedure or report. I use this frequently when building procedures which by means of the scheduler will send reports by mail. The procedure uses a local copy of one of the 'stack' tables (normally STACK which has precisely one field - ELEMENT - which is of course the table's primary key). For example,
:GROUP = 'NO_CNC'; EXECUTE WINACTIV '-R', 'TEST_NOPROD_METAL', 'STACK', :$.STK, '-g', :GROUP;
My original thought was to do something similar in the form's post-form trigger: pass the parent form's id number to a procedure that then iterates through the chosen satellites, invoking an interface to do the actual addition to the satellite. As each procedure/interface is specific to a given form (i.e. the code necessary for updating customers is similar to, but different from, the code necessary for updating suppliers), there is no need to pass the datum's type (customer, etc.) to the procedure. As only one datum is required, I could use the STACK table as shown above.

The communal form's POST-FIELD trigger had code like this:
GOTO 2 WHERE :TEST_CHANGED = 0; SELECT SQL.TMPFILE INTO :FILE FROM DUMMY; LINK STACK TO :FILE; INSERT INTO STACK (ELEMENT) VALUES (:$$.NSCUST); GOTO 2 WHERE :RETVAL <= 0; GOTO 1 WHERE :$$.EXTTYPE <> 'p'; /* not a part */ EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYPART', 'STACK', :FILE; GOTO 2; LABEL 1; GOTO 3 WHERE :$$.EXTTYPE <> 'C'; /* not a customer */ EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYCUST', 'STACK', :FILE; GOTO 2; LABEL 3; ... LABEL 2; UNLINK AND REMOVE STACK;
This seemed reasonable but it didn't work! I spent a very frustrating hour discovering that the procedure was not receiving data. It transpires that every procedure has to have its data passed in a linked table of the correct type: the 'copy part' procedure has to receive a linked table of parts and the 'copy customer' procedure has to receive a linked table of customers, etc. So the final code became
GOTO 2 WHERE :TEST_CHANGED = 0; SELECT SQL.TMPFILE INTO :FILE FROM DUMMY; GOTO 1 WHERE :$$.EXTTYPE <> 'p'; /* not a part */ LINK PART TO :FILE; INSERT INTO PART SELECT * FROM PART ORIG WHERE PART = :$$.NSCUST; EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYPART', 'PART', :FILE; UNLINK AND REMOVE PART; GOTO 2; LABEL 1; GOTO 3 WHERE :$$.EXTTYPE <> 'C'; /* not a customer */ LINK CUSTOMERS TO :FILE; INSERT INTO CUSTOMERS SELECT * FROM CUSTOMERS ORIG WHERE CUSTOMER = :$$.NSCUST; EXECUTE BACKGROUND ACTIVATE '-P', 'TEST_COPYCUSTOMER', 'CUSTOMERS', :FILE; UNLINK AND REMOVE CUSTOMERS; GOTO 2; LABEL 3; ... LABEL 2;
Using the BACKGROUND parameter after EXECUTE means that the actual copying occurs in the background and so allows the user to continue working without interruption.

One final problem to look out for: users must be able to execute the copying procedures, which means that they have to appear in a menu for which regular users have access. 

Monday 11 May 2020

Appropriating the 'HTML document' framework

It all started when I was asked to prepare a report for the CEO ... well, three reports if you don't mind, and the CEO would like to have them all in one email. My thinking process went something like this: to send the reports in an email, we will have to build a letter and attach the files to the letter (I've done this before), but the only way that I know to save a report automatically as a file (so that it can be attached) is to save it as a pdf file, and the only way that I know to accomplish this is to use an 'HTML document', like a delivery note or an invoice. Anyway, there's no connection between the three reports so there's no way I can define a suitable HTML document.

Or is there? Looking at this problem laterally, an HTML document is built from several sub-reports, all of which display data which is connected to a single item, be it order, delivery note or invoice. But when one looks closely at this, there's no mandatory reason that all the sub-reports have to look at the same item, or even look at it at all. I could pass an item which is guaranteed to exist (like part 000 or customer www) then ignore it.

So I set about writing an HTML document from scratch: this used to be a daunting experience, especially seeing as it is almost totally undocumented and very fragile, but now I've got some experience in this and know that I am capable of the required programming. Speaking abstractly, there are three sections to an HTML document: the first is composed of three (or possibly four) stages, these being INPUT, HTMLCURSOR, possibly HTMLEXTFILES and SQLI. The second section comprises the various reports which need to be displayed, and the final section is another INPUT stage where various parameters are displayed.

The input stage generally has one parameter named PAR that gets linked to a table holding the values which are being passed to the document. The table can have one row or many, depending on how the document is being invoked (one row if invoked from a form, several rows if invoked from a menu). The HTMLCURSOR stage seems like black magic: normally one would visibly link the PAR parameter to a table but here it seems to be done automatically (presumably based on information in the INPUT stage). This stage simply consists of a statement like SELECT ORD FROM ORDERS WHERE ORD > 0; this assumes that PAR consists of a linked table of orders. Ignoring the HTMLEXTFILES stage for the moment (this selects any attachments or 'ext[ernal] files' connected to the given record), the SQLI statement at a minimum seems to be a recapitulation of the HTMLCURSOR stage as it extracts the key field from the current record and saves it as a parameter for the sub-reports.

But nowhere does it say that the sub-reports have to use this key field! Obviously, a normal HTML document would use this key field, but here I am appropriating the framework of the HTML document and I couldn't care less about this key field. It seems that I have to extract it though and even pass it to the sub-reports (where it gets ignored) otherwise the document doesn't display.

The SQLI stage can also issue SQL queries and store the results in linked temporary files for the following sub-reports to use. As two of the three reports which are to be displayed require some data mining, I can do this during the SQLI stage.
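As a hedged illustration (STACK4's fields and the parameter names here are my assumptions, not taken from the actual document), the SQLI stage might prepare data for one sub-report like this:

```sql
/* Obligatory: extract the key of the 'ignored' record for the sub-reports */
:$.KEY = 0;
SELECT ORD INTO :$.KEY FROM ORDERS WHERE ORD > 0;
/* Data mining: aggregate order totals per customer into a linked
   temporary table that a sub-report will receive as its parameter */
LINK STACK4 TO :$.TM1;
ERRMSG 1 WHERE :RETVAL <= 0;
INSERT INTO STACK4 (KEY, REALDATA)
SELECT CUST, SUM (QPRICE)
FROM ORDERS
WHERE ORD > 0
GROUP BY CUST;
/* :$.TM1 is then named as a parameter of the relevant sub-report */
```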

Assuming now that I have an HTML document which actually displays the three disparate reports, how do I invoke it to create a PDF file then send an email with the PDF file attached? I've done this bit before: one uses the WINHTML program to create the document and save it as a PDF, then one builds an email and sends it (I wrote about this some years ago)
LINK PART TO :$.PAR; GOTO 99 WHERE :RETVAL <= 0; INSERT INTO PART SELECT * FROM PART ORIG WHERE PARTNAME = '000'; :FNAME = '../../system/mail/TEST_ALL_IN_ONE.pdf'; EXECUTE WINHTML '-d', 'TEST_WWWALLINONE', 'PART', :$.PAR, '-pdf', :FNAME; UNLINK PART; SELECT ENTMESSAGE ('$', 'P', 10) INTO :SUBJECT FROM DUMMY; LINK GENERALLOAD TO :$.GEN; GOTO 99 WHERE :RETVAL <= 0; INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT, TEXT2) VALUES (1, '1', :SUBJECT, 'someone@somewhere.com'); INSERT INTO GENERALLOAD (LINE, RECORDTYPE, TEXT) VALUES (2, '2', :FNAME); EXECUTE INTERFACE 'TEST_SENDMAIL', SQL.TMPFILE, '-L', :$.GEN; :LETTER = 0; SELECT ATOI (KEY1) INTO :LETTER FROM GENERALLOAD WHERE LOADED = 'Y' AND RECORDTYPE = '1'; LINK MAILBOX TO :$.MB; INSERT INTO MAILBOX SELECT * FROM MAILBOX ORIG WHERE MAILBOX = :LETTER; EXECUTE SENDMAIL :$.MB, :XMSG; /* send it */ UNLINK MAILBOX; UNLINK GENERALLOAD; LABEL 99;
I don't imagine that I will frequently use this technique of making a fake HTML document composed of unrelated sub-reports, but it is an interesting technique.

Wednesday 29 April 2020

LIKE cannot accept a variable

I wanted to write a form trigger which is dependent on the value of a certain field (the number of a previous order). Unfortunately, this field is not bound but rather a simple text field, which means that instead of it holding values like KL191234 (this would be the previous order number), it holds values like 1234, 191234 or even 1234/5. This is problematic, but one evening the answer came to me: use the standard SQL keyword LIKE, where the order number is preceded by *. Thus KL191234 will be matched by *1234 or *191234 (but not *1234/5). On this basis I wrote the following code, where TEST_PREVORDER holds the value of the previous order
:PREVORD = 0; SELECT ORD INTO :PREVORD FROM ORDERS WHERE ORDNAME LIKE STRCAT ('*', :$.TEST_PREVORDER) AND CUST = :$.CUST; GOTO 99 WHERE :RETVAL <= 0;
The syntax checker told me that there was an error with STRCAT. In order to combat this, I placed the STRCAT function before the query, assigning its value to a variable, then used this variable in the query. This did not help matters.

Eventually I realised what the problem was - LIKE (at least, in its Priority definition) cannot accept a variable as its parameter; it has to be a 'naked' string like '*T' or 'T*'. As usual, this doesn't seem to be defined anywhere, but I discovered that I had commented on this a few years ago (not on this blog).

So add this to the growing list of improvements for Priority SQL. The syntax checker
  1. should check that every cursor which is opened is also closed;
  2. should check that every LINK has a matching UNLINK (this is probably very difficult);
  3. should allow LIKE to take a variable as its parameter.
The final code became
/* Stupid code required because LIKE cannot accept a variable as a parameter */ :TPO = :$.TEST_PREVORDER; :TLEN = STRLEN (:TPO); GOTO 4 WHERE :TLEN = 4; GOTO 6 WHERE :TLEN = 6; GOTO 8 WHERE :TLEN = 8; GOTO 99; LABEL 4; :TPO = STRCAT (ITOA (YEAR (SQL.DATE8) MOD 100), :TPO); LABEL 6; :TPO = STRCAT ('KL', :TPO); LABEL 8; :PREVORD = 0; SELECT ORD INTO :PREVORD FROM ORDERS WHERE ORDNAME = :TPO AND CUST = :$.CUST; GOTO 99 WHERE :RETVAL <= 0;
Note that the code 'falls through' the labels: if TPO is '1234', then the code will jump to label 4, where TPO becomes '191234'. Then the code for length 6 executes: TPO becomes KL191234. Then the code for length 8 executes - which is the real code. In other cases, there would be another GOTO on the line before 'LABEL 6' but here it is not required.

Tuesday 28 April 2020

Removing a private field from a form

Disclaimer: all of the below is written to the best of my knowledge. As this topic is barely documented, I have to base my comments on my experience which may well be limited. In other words, there may be a simple way of getting around the pitfall described below of which I am simply unaware.

Extra disclaimer: the pitfall described below seems to be a problem specific to one company. I cannot reproduce the error on my system.

The scenario: a client wants to add a new personalised field to an existing form. My normal way of doing this is to add the required field to the base table of the form and then add the field to the form which will display it; a more complicated method involves adding the field to a continuation table (good examples of this are the service calls and projects forms). Built-in triggers on every form are responsible for loading and saving all data which is displayed on the form which is derived from the base table, whereas the developer is responsible for loading and saving data which comes from a continuation table. In other words, if I add a field to a base table and then to a form, I don't have to worry about it being loaded or saved. 

The problem: after a few days' testing, the client decides that she doesn't need the added field. OK: the automatic, unthinking, simple solution is simply to remove the field from the form by deleting the definition. WRONG!!!! Form preparation, after adding the field, will modify the built-in triggers to include the new field (if it was defined as belonging to the base table) and so deleting the field from the form will cause these triggers to scream that they are missing a field when a user accesses this form. What is worse is that 'form preparation' does not detect this problem. The developer has no way of accessing the built-in triggers so it is not possible to remove from them the references to the new field. 

The correct way to 'remove' the added field would seem to be to hide it, not delete it.

In my opinion, the bug is that after deleting the field, form preparation does not update the built-in triggers and instruct them to remove the added field.   

Ironically, displaying the field via a continuation table does not cause this problem as the field is loaded and saved 'manually' and so the developer can simply remove the references to the added field.

[Edit from 09/22: the same problem happened at another company running Priority 21. Fortunately we were working on a test server so the problem will not be propagated to their real server.]

Sunday 26 April 2020

Welcome

Welcome to 'Programming Pitfalls in Priority'! 'Programming' is to be read as an adjective, not a verb, so I'm writing about the pitfalls which await programmers in the Priority ERP system. I am not affiliated with this company; I work for a normal manufacturing company as an ERP developer and trainer. I have been working with Priority for 13 years and before that with a simpler ERP system, so I have plenty of experience in my field. 

In this blog I will be writing about pitfalls which I personally have come across in the Priority system during the course of my work. Generally these topics are not documented or hinted at in the Software Development Kit (SDK) and so they are quite advanced. As my descriptions and solutions are based on my own work, there may be better ways of overcoming the pitfalls; I am not privy to internal documentation and so am forced to depend on my own ingenuity, sometimes with the help of colleagues.