I’m looking for ways to speed up very large lookups, and ideally to run extremely large lookups that would not otherwise be possible.
One of my test pairs comprises a 12,000-record database looking up values in another of the same size, with very few matches. Because matches are rare, almost every primary d/b record ends up interrogating almost every target d/b record, which works out to something on the order of 144 million comparisons.
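To make that cost concrete, here is a language-agnostic sketch in Python (not the actual lookup code; the record layout and the field names "key" and "value" are invented for illustration). It contrasts a sequential scan, whose cost grows with primary × target, against a one-time hash index, whose cost grows with primary + target:

```python
# Hypothetical illustration only; field names are made up.

def naive_lookup(primary, target, default=""):
    # Sequential scan: each primary record walks the target list.
    # With few matches, nearly every lookup scans all of target,
    # so cost grows as len(primary) * len(target).
    results = []
    for rec in primary:
        found = default
        for trec in target:
            if trec["key"] == rec["key"]:
                found = trec["value"]
                break
        results.append(found)
    return results

def indexed_lookup(primary, target, default=""):
    # One pass to build a hash index, then one pass to look up:
    # cost grows as len(primary) + len(target).
    index = {trec["key"]: trec["value"] for trec in target}
    return [index.get(rec["key"], default) for rec in primary]

if __name__ == "__main__":
    primary = [{"key": f"P{i}"} for i in range(12000)]
    target = [{"key": f"T{i}", "value": i} for i in range(12000)]
    # Almost no keys match, mirroring the worst case described above.
    assert naive_lookup(primary[:100], target) == indexed_lookup(primary[:100], target)
```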
On an i5 iMac with 8 GB of RAM, the straightforward lookup takes just under ten minutes, and Activity Monitor shows that memory pressure is high but not into the red zone. I acknowledge that Activity Monitor is not a perfect tool, but there is no doubt that performance slows down appreciably when it shows high memory pressure.
In an attempt to reduce the memory pressure, I tried processing the lookup in batches, using a loop to select a subset of primary records at a time (a sketch of the shape of that loop follows below). To my surprise, the processing time per group increased as the run went on, and so did the memory pressure. In the extreme case of dividing the d/b into twelve 1,000-record blocks, the lookup took over 16 minutes, and the final blocks were processed at a glacial pace with memory pressure constantly in the red zone.
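For clarity, this is roughly the shape of what I tried, written as a Python sketch rather than the real code; `BATCH_SIZE` and `process_batch` are placeholders standing in for the actual selection and lookup steps:

```python
# Pseudocode sketch of the batching loop (Python used only for
# illustration; process_batch is a placeholder for the real lookup).
BATCH_SIZE = 1000  # twelve 1,000-record blocks in the extreme case

def batched_lookup(primary, target, process_batch):
    results = []
    for start in range(0, len(primary), BATCH_SIZE):
        batch = primary[start:start + BATCH_SIZE]
        # Expectation: each batch keeps fewer records in play, so
        # memory pressure stays low. Observation: later batches ran
        # slower and pressure kept climbing anyway.
        results.extend(process_batch(batch, target))
    return results
```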
All of that leads to one simple question: is there any way to avoid the steady build-up of memory pressure? A `memoryflush` statement would be great.