Hi,
I just imported a split .dat file: I appended the table-based header rows from the LNDB export to the top of the newly split data, then imported the properly formatted .dat file into LNDB using the Tools -> Import Data feature. It imported duplicate records into the database for every record that already existed. This is a serious flaw in the LNDB import feature. Would it be logical to add a "Select unique key(s)" option so that duplicate values are not imported? This has created quite a mess!
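For reference, the header-prepend step I did looks roughly like this in Python (file names are placeholders; a TOA5 export carries four header rows):

HEADER_LINES = 4  # a TOA5 file has four header rows

with open("dbexport_header.dat") as src:    # placeholder name
    header = [next(src) for _ in range(HEADER_LINES)]

with open("split_output.dat") as body:      # placeholder name
    rows = body.readlines()

with open("import_ready.dat", "w") as out:  # placeholder name
    out.writelines(header + rows)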
AMK
LNDB uses the timestamp and record number to determine duplicates. Did both of the files include these fields?
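In rough pseudo-Python terms (a sketch only, not LNDB's actual code), the duplicate check works like this:

seen = set()

def is_duplicate(timestamp, record_number):
    # a record is a duplicate when its (timestamp, record number)
    # pair has already been imported
    key = (timestamp, record_number)
    if key in seen:
        return True
    seen.add(key)
    return False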
Also, what database are you using?
Thanks, Dana
Hi and thanks for the reply.
I'm using SQL Server with LNDB. For an unknown reason, LNDB stopped sending data to the database for a period of time this summer, but the data were still being saved to the .dat file.
So I wrote a Split .par file to convert the array-based .dat data to the format LNDB wants to import, and I appended the TOA5 headers from LNDB to this newly split .dat file. The new TOA5 file has its own RecNum, since Split knew nothing about the LNDB RecNum, so the newly created .dat file had different TmStamp-RecNum keys than the LNDB table. As a result, the import duplicated everything except the period of time missing from LNDB. Is this confusing enough?
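For anyone following along, the conversion Split performed is roughly this in Python (the field positions are my assumptions; the real work was done by the .par file):

from datetime import datetime, timedelta

def array_row_to_toa5(row, record_number):
    # row example: [array_id, year, day_of_year, hhmm, value1, value2, ...]
    year, doy, hhmm = int(row[1]), int(row[2]), int(row[3])
    stamp = (datetime(year, 1, 1)
             + timedelta(days=doy - 1, hours=hhmm // 100, minutes=hhmm % 100))
    # record_number is freshly generated here -- exactly why the
    # TmStamp-RecNum keys no longer matched what LNDB already had
    return ['"%s"' % stamp.strftime("%Y-%m-%d %H:%M:%S"),
            str(record_number)] + [str(v) for v in row[4:]]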
LNDB does not support the import of array-based files. That's on the list for a future enhancement, but it is not supported in LNDB 1.1.
As I mentioned, LNDB relies on the timestamp and record number to avoid duplicate records when importing. Mixed-array dataloggers do not store a record number in the array.
Dana
I understand that, which is why I used Split to create a TOA5-formatted file from an array-based file and then imported that into LNDB. Doing it this way creates TmStamp-RecNum pairs that don't match the ones already in LNDB, so the same observations were imported again. I guess one way around this would be to let the user importing data into LNDB select her own primary key (e.g., TmStamp and LoggerID), or something other than the RecNum field. Just thoughts; I see what I did wrong now and won't do it again.
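Something like this, sketched in Python (the column names are just my example; a real importer would let the user pick them):

def filter_new_rows(rows, header, key_columns=("TmStamp", "LoggerID")):
    # key_columns: whatever the user selects as the unique key
    idx = [header.index(c) for c in key_columns]
    seen = set()
    for row in rows:
        key = tuple(row[i] for i in idx)
        if key not in seen:   # only pass rows whose key is new
            seen.add(key)
            yield row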
Thanks,
AMK
Hello,
I am doing the same thing. It worked for two sets of data, but now I have a problem where the import stops 80.29% of the way through my .dat file and reports:
61948 record(s) processed.
61948 record(s) inserted.
0 record(s) failed to be inserted.
0 record(s) were duplicate.
My SPLIT Input file select field looks like this:
x=x+1.0,edate("m/d/yyy hh:nn";4;3;2),y=x+100000.0,y,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17
so I would have unique record numbers.
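In plain terms, that select line keeps a running counter and offsets it by 100000 so the generated record numbers can't collide with what's already in the table; roughly this in Python (my reading of the Split expressions, not an official spec):

def numbered(rows, offset=100000.0):
    x = 0.0
    for row in rows:
        x = x + 1.0            # x=x+1.0 -> running counter
        yield x + offset, row  # y=x+100000.0 -> offset record number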
Here is the header from the DBExport file:
"TOA5","","","","","","",""
"TIMESTAMP","RECORD","Year_RTM","Day_RTM","Hour_Minute_RTM","ws_in_S_WVT","ws_in_U_WVT","wd_540_DU_WVT","wd_540_SDU_WVT","ws_in_MAX","temp_F_AVG","RH_AVG","Battery","H2S_amb_AVG","H2S_amb_MAX","H2S_amb_Hr_Min_MAX","H2S_raw_AVG","min_per15"
"TS","RN","","","","","","","","","","","","","","","",""
"","","","","","","","","","","","","","","","","",""
Maybe the SQL database I am putting the data into doesn't have any space left?
When that happens to me, I look at the last line that was imported into the database and then find that spot in the .dat file. Every time, there was an improperly formatted line there, and removing that line fixed the issue.
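A quick way to hunt for such a line (a sketch; adjust the header-row count and file name to match your file):

import csv

with open("import_ready.dat", newline="") as f:   # placeholder file name
    rows = list(csv.reader(f))

expected = len(rows[1])  # the field-name row fixes the column count
for lineno, row in enumerate(rows[4:], start=5):  # data starts after 4 header rows
    if len(row) != expected:
        print("line %d has %d fields, expected %d" % (lineno, len(row), expected))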
Maybe CSI has a direct array-based importer script now, though?
The LNDB development team has the import of array-based .DAT files on the list of features to consider implementing in future versions of LNDB, but no work on that has begun yet.
brianwallace56, thanks a lot! :) That helped me. :)
Did you solve it, AMK?
Thanks!