Replacing old import techniques with new

In the olden days, to get around unwanted record creation or field movement, the trigger characters were replaced with “tokens”, then reset, if desired, to their original values with another replace once the data was imported. Is there a better PanX way? Like a parameter telling the import to ignore commas?

In the past, I’d replace commas with some rare character. Then, after import, I’d visit all the fields, replacing the token character with a comma again. Another way might be to concatenate a blank record with a tab character at the beginning, before import - so commas are ignored (tab becomes the field designator) - then select and delete that extra record after import.
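That comma-token round trip can be sketched in Python (the token character here is an arbitrary stand-in for whatever rare character you'd pick; this is an analog of the technique, not Panorama code):

```python
TOKEN = "\x01"  # assumed rare character that never occurs in the data

def hide(text):
    """Protect embedded commas before a comma-splitting import."""
    return text.replace(",", TOKEN)

def unhide(text):
    """Restore the commas in a field after import."""
    return text.replace(TOKEN, ",")

line = "Jones, Robert, and family"
imported_fields = hide(line).split(",")    # the import now sees no commas
assert imported_fields == [hide(line)]     # the line arrives as one intact field
assert unhide(imported_fields[0]) == line  # post-import replace restores commas
```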

I’m importing via the clipboard - maybe later replaced by pulling directly from the URL - a bunch of undisciplined text that needs to be parsed into records and fields. The “records” are designated by two formfeeds (FF), but there are embedded formfeeds and commas in the text bodies. The same replace/replace idea worked for the double FF. You’d replace the double FF with a token character, then change all remaining FFs to spaces - that gets rid of the embedded ones - then change the token back to an FF to restore the end-of-record marker before importing.
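The three-step replace described above, sketched in Python (again just an analog; the token character is an assumption):

```python
FF, TOKEN = "\x0c", "\x01"  # formfeed, plus an assumed never-occurring token

def clean(text):
    text = text.replace(FF * 2, TOKEN)  # protect the real record boundaries
    text = text.replace(FF, " ")        # neutralize embedded formfeeds
    return text.replace(TOKEN, FF)      # restore one FF per record break

raw = "rec one\x0cwith embedded FF\x0c\x0crec two"
assert clean(raw) == "rec one with embedded FF\x0crec two"
```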

Another awkward dance was emptying a database. You couldn’t have a database without records, so you added a record with some unique content in one field, searched for that record, removed the unselected records, then cleared that search field so you’d be left with one empty record.

Is there a new/better way to empty the database? I always thought that after someone selected all the records and deleted the selection, Panorama could - maybe after an “Are you sure?” - delete all the records and add a blank record, so you’d have the one required record without having to do the search-and-clear dance.

I’m starting through the training videos and I see a lot on adding records. But at this stage I’ll mostly be importing from the clipboard, seeing where my parsing scheme fails to take a situation into account, fixing that, emptying the database, and trying the import again.

I’ve seen import “helpers” but they seem designed for somewhat structured data. I’m dealing with incrementing through combinations of commas, periods, asterisks, brackets, and spaces to delimit what I’ll be pulling out as fields. It may also mean combining data from sequential records into one record - like the content of three fields is on one record, and the content for two more fields is on the subsequent record.

I can increment my way through the clipboard text, pulling out the content for various fields and concatenating that data into another variable that is orderly/well-behaved, so it will import with the correct record breaks and the data distributed to the correct fields.
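A Python sketch of that rebuild step - the field layout (a name, a bracketed id, trailing notes) is a made-up example, but the shape is the same: parse each messy block, then emit a tab-delimited, CR-separated variable that imports cleanly:

```python
import re

def rebuild(blocks):
    """Parse messy text blocks into a well-behaved tab/CR import stream.

    The pattern here (name, [id], trailing text) is purely illustrative.
    """
    rows = []
    for block in blocks:
        m = re.match(r"(?P<name>[^\[]+)\[(?P<id>[^\]]+)\]\s*(?P<rest>.*)", block)
        if m:
            rows.append("\t".join([m["name"].strip(), m["id"], m["rest"]]))
    return "\r".join(rows)  # CR as the record delimiter

good = rebuild(["Alice [A-1] notes here", "Bob [B-2] more notes"])
assert good == "Alice\tA-1\tnotes here\rBob\tB-2\tmore notes"
```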

But I’ll be doing that over and over while developing the parsing code to handle the many variations. Aside from adding the blank “keep me” record and deleting all the rest, is there a fancier/faster way to empty the bucket and start again with an empty database - other than specifying some “replace the database with this” parameter on import?

Wait - I forgot about Jim’s super Undo. I can check the parsing, then just undo the import, fix stuff, and import again - that will work. Or, once I get the fields all designated, make that empty database a template.

Still, is there a one-line command to dump all the data? I can write a procedure to do the search/remove unselected dance. But looking for more modern, PanX world, ways.

Ummm, how about DeleteAll.

And if you haven’t used the Text Import wizard, you’ve missed a lot of the incredible importing capabilities of Pan X.

Automating a text import depends on whether you can reliably discern field and record separators. If so, you should be able to put all the necessary steps in a reusable procedure.

But Panorama X has another way to import text: Do you know the options of the menu command File > Import > Import Text File Into Current Database … ? It can import text from files or from the clipboard, is configurable (select columns, create import formulas), and it can save ready-made configurations as reusable templates.

Yea! A DeleteAll, now that’s the ticket.

In the olden days I used Monarch (for the PC) to wrangle text before importing. Those kinds of tools, if automated, expect a level of uniformity. A particular field might be in the wrong order for the desired import, but it’s uniformly in the same wrong position.

In my situation, one piece of data may be at the beginning of a block of text, at the end of that block of text, embedded between square brackets, or embedded between parentheses.

The variations are finite but different enough to warrant CASE, or IF/ELSE type statements.

At this point, it seems the easiest way is to increment through the copied text, using text funnels or regular expressions, to pull out the fields, rebuilding them in the desired order (tabs for field delimiter, CR for record delimiter) into a variable, then importing from that “good” variable.
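Those positional variations (beginning of the block, end, inside square brackets, inside parentheses) map naturally onto a small if/elif ladder over regular expressions. A hedged Python sketch, with made-up data:

```python
import re

def extract_code(block):
    """Try the finite variations in order; a rough CASE-style illustration."""
    m = re.search(r"\[([^\]]+)\]", block)   # between square brackets
    if m:
        return m.group(1)
    m = re.search(r"\(([^)]+)\)", block)    # between parentheses
    if m:
        return m.group(1)
    m = re.match(r"(\S+)\b", block)         # at the beginning of the block
    return m.group(1) if m else ""

assert extract_code("text [X42] more") == "X42"
assert extract_code("text (X42) more") == "X42"
assert extract_code("X42 leads the text") == "X42"
```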

Before I forget, thank you Gary for your invisible-character routine in the Exchange. Before, I would paste into BBEdit or TextEdit, then examine the hex rendition of the data. But those apps would change what was raw on the clipboard. For example, two formfeeds in a row were translated to a formfeed and a backslash.
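As a quick cross-check on clipboard contents without an editor translating them, Python’s repr() shows control characters as escapes instead of rendering them - a rough analog of that invisible-character routine:

```python
# repr() renders control characters as escape sequences, so nothing is
# silently translated the way a text editor might.
raw = "rec one\x0c\x0crec two\ttab inside"
print(repr(raw))  # prints: 'rec one\x0c\x0crec two\ttab inside'
```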

And, again, just beginning the videos - a long gap since the classes - I’m amazed at what Jim has created. Note that Microsoft and FileMaker have teams of people working on stuff. Just as Bernie (whether you agree with him or not) has stuck with his message throughout the years, so Jim has shepherded Panorama from its beginning as OverVue to the gem it is today. And most of that time was, frankly, without Apple support. More than once, when Apple would tout FM as “the database for the Mac” - i.e., free advertising - they’d need to be reminded that there was another database alternative. In fact, we should pressure Apple to give Jim a Lifetime Achievement Award.

The tagparameter( function might also be useful depending on the text to be parsed.
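For readers unfamiliar with it, tagparameter( pulls out the text between a pair of tags. A rough Python analog (an illustration of the idea, not Panorama’s actual implementation):

```python
import re

def tag_parameter(text, open_tag, close_tag):
    """Return the text between the first open_tag/close_tag pair, or ""."""
    pattern = re.escape(open_tag) + r"(.*?)" + re.escape(close_tag)
    m = re.search(pattern, text)
    return m.group(1) if m else ""

assert tag_parameter("name=[Alice] age=[42]", "[", "]") == "Alice"
```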

Not sure if you are aware of the importtext statement.

This has the ability to import text from a variable, file, or any formula. You can rearrange the text into the format you need as the text is imported. And there is an option to delete all of the existing data as the import starts.

FYI, the Text Import wizard uses the importtext statement. In fact, you can open a procedure, start the recorder, then use the Text Import wizard to import text. The recorder will capture the code the Text Import wizard used, based on the template you have set up.

Thank you, Jim and CooperT. I started out with importtext Clipboard() to get the ball rolling. Then I saw how the records and fields were being created and Gary’s View Invisible file showed why.

I’m learning the idiosyncrasies - like in the data sheet, when I put a number right-justified in a numeric field, then double-click on the field to open it, the number (if a single digit) is out of view to the right. And when I drag a field name across the others to reposition it, it bounces back to where it originally was. And though it has corrected itself now, when I switched a field from left or center justification to right justification, the field name stayed right-justified. So, more time with those intro videos.

At this point, it’s like a dog walking into a room. The dog knows it wants to be in the room but has to circle a bit before sitting down. Or maybe two ways to build a house. You can figure out all the rooms first (build forms), then bring in (import) the furniture. Or (hey, just as valid) you can have the furniture delivered first - best in summer cause it will be sitting on the front lawn - and looking at it, you’ll know what rooms you need to build for it.

Just a bunch-o-noob mistakes. I usually start out slow and catch up later - I’m at the slow stage now :smile: