B26 unexpected quit with segmentation fault

Started my usual weekly run of a long (> 1 hour if I don’t break it down into segments) automated series of multi-db procedures. It stopped roughly halfway through with an unexpected quit of PanX. There have been several seemingly identical quits today as I’ve been tracing the problem. Running under Terminal with enough zlog statements added eventually pointed to this statement:
FormulaFill NOTE+ str(aggregate({«Chart Number»}, "count", {««ChargeRef»»=«Charge Reference» and «Who Paid»="1" and "IP" contains TransType}, "WS Payment XRef", true()))
A zlog statement on the line before in the procedure (".NoteToDos") in db “LinkedCharges&Pays” showed in Terminal; the zlog on the line after didn’t. Instead the next output in Terminal was:

/Users/johnbovenmyer/Applications/Run Panorama X Using Terminal: line 2: 7224 Segmentation fault: 11 /Applications/PanoramaX.app/Contents/MacOS/PanoramaX

Saving session...
...copying shared history...
...saving history...truncating history files...

I don’t think that procedure had changed in a couple of years, and I’d never needed to add zlog statements to it before, unlike several other procedures in this run. I’m running an Intel laptop under Catalina. The one obvious change is b26. An identical FormulaFill statement did run about 20 code lines earlier, albeit filling a different selection. I half recall a problem with unexpected quits a couple of years back, I think also involving FormulaFills that aggregate a different db, which likewise reported segmentation faults. It had me sticking with an earlier beta version for a while, which didn’t cause it. I think I resolved that by recoding it to employ summarytable(, which might be a workaround for this too, so I don’t know whether the earlier problem was fixed by subsequent betas or not. But I’d never had problems here before. My next move is to track down that old fix to see whether I can apply it here. In the meantime I thought it worth reporting, and maybe PanX’s automatic crash reporting will show something useful at Jim’s end. The files and code are too big to ask Jim to review, even if they weren’t medically confidential.

As an aside, my investigations were slowed because my instrumentation settings often didn’t stay turned on, mostly in db “LinkedCharges&Pays”. I wonder whether my unusual naming choice several years back, which includes “&”, could have anything to do with that. I’ve periodically noticed that behavior for months.

FYI, these have shown up in the crash reports I get. Unfortunately, these particular reports aren’t really providing any particularly useful diagnostic information. The crash is occurring in low level code which fetches information from the data in RAM, code which is called thousands of times per second. So it’s not immediately obvious why the crash is occurring in this particular situation.

I can’t rule that out, but it seems unlikely. The crash report does show the stack at the time of the crash, and none of the code involved changed in the b26 release. Most of the changes in b26 involved how Panorama interacts with macOS. This crash doesn’t involve macOS at all, it entirely involves internal Panorama code.

If this is possible, it would almost certainly be way faster. Your existing code is scanning the entire current database (via formulafill), and for every record in the current database it is completely scanning the WS Payment XRef database. Depending on how large the two databases are, you may be effectively scanning millions or even tens of millions of records.
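For illustration only, here is the scanning-within-a-scan pattern sketched in Python (the field names and database sizes are made-up stand-ins mirroring the formula above, not the actual data): each record in the current database triggers a full scan of the other database, so the work grows as the product of the two sizes.

```python
# Hypothetical stand-ins for the two databases; names and sizes are assumptions.
linked_charges = [{"ChargeRef": r % 1000} for r in range(5000)]
payment_xref = [{"Charge Reference": r % 1000, "Who Paid": "1", "TransType": "I"}
                for r in range(3000)]

def count_matches(charge_ref, xref):
    """One aggregate(..., "count", ...) call: a complete scan of the other db."""
    return sum(1 for rec in xref
               if rec["Charge Reference"] == charge_ref
               and rec["Who Paid"] == "1"
               and rec["TransType"] in "IP")   # the '"IP" contains TransType' test

# FormulaFill analogue: one full scan of payment_xref per current-db record.
notes = [str(count_matches(rec["ChargeRef"], payment_xref))
         for rec in linked_charges]

# Total record visits: 5,000 x 3,000 = 15,000,000 for even these modest sizes.
print(len(linked_charges) * len(payment_xref))  # → 15000000
```

With real patient databases holding tens of thousands of records each, this is how the count reaches millions or tens of millions of scanned records.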

My educated guess is that the problem is somehow caused by this scanning within a scan. But I don’t know why it would have worked before and not work now, there have been no recent changes in this code.

Thanks for looking at your end! I’d guessed the same, but with the coincidence of b26 dropping I wanted reassurance I wasn’t heading down a dead end before attempting to code around it. I’ve found what I half recalled from a couple of years back. Although that problem wasn’t quite as close as I’d thought, I think its summarytable methodology should work: instead of a FormulaFill in db1 calculating an aggregate of related db2 for each record, have summarytable calculate all the aggregates at once, import that into a predesigned ‘utility’ db3, then join the results back into db1. My recollection is that the end product was indeed significantly faster last time and wasn’t as difficult to code as I’d initially feared. It will doubtless take me a few tries to get the details correct, but the concept is clear. Moreover, this procedure has seven instances of nearly exactly the same statement: figure it out once, then practically cut and paste the other six.
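The summarytable-then-join plan can be sketched in Python (same hypothetical stand-in data as any per-record aggregate would scan; Panorama’s actual summarytable( and join syntax will differ, this only shows the shape of the computation): build every count in one pass over db2, then look each one up by key in one pass over db1.

```python
from collections import defaultdict

# Hypothetical stand-ins for the two databases; names and sizes are assumptions.
linked_charges = [{"ChargeRef": r % 1000} for r in range(5000)]
payment_xref = [{"Charge Reference": r % 1000, "Who Paid": "1", "TransType": "I"}
                for r in range(3000)]

# Step 1 (summarytable analogue): one pass over db2 computes all counts at once.
counts = defaultdict(int)
for rec in payment_xref:
    if rec["Who Paid"] == "1" and rec["TransType"] in "IP":
        counts[rec["Charge Reference"]] += 1

# Step 2 (join analogue): one pass over db1 fetches each count by key,
# replacing a full scan of db2 per record with a single lookup.
for rec in linked_charges:
    rec["NOTE"] = str(counts[rec["ChargeRef"]])

# Work done: len(db1) + len(db2) record visits instead of len(db1) * len(db2).
print(linked_charges[0]["NOTE"])  # → "3"
```

The intermediate `counts` table plays the role of the predesigned ‘utility’ db3: it holds the summary output so the join back into db1 is a keyed lookup rather than a rescan.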