CSV import and CSV export are performed in multiple chunks because CSV files can be very large. Sending or receiving a CSV document in a single web service call would require most web service platforms to keep the entire document in memory.
You can see the format of CSV files exported through the Web Service API here.
To perform a CSV import, follow these steps:
- Start a new CSV import session by calling BeginChunkedCSVImport.
- Import the chunks of the CSV document by calling AddNextCSVChunk in turn.
- Call EndChunkedCSVImport at the end to close the CSV import session.
It is crucial that chunk boundaries fall on whole characters. A chunk must never end with the first bytes of a multi-byte Unicode character while the next chunk begins with its remaining bytes! In .NET this is easy to guarantee by using a StreamReader to process the CSV file on the client, because it reads by character rather than by byte.
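For illustration, a character-safe chunking loop might look like the following Python sketch. The service proxy and session handling are stand-ins for the real BeginChunkedCSVImport / AddNextCSVChunk / EndChunkedCSVImport calls, whose exact signatures are not shown in this section:

```python
import io

def import_csv_in_chunks(service, session_id, path, chunk_chars=500_000):
    """Send a CSV file to an already opened chunked-import session.

    Reading the file in text mode guarantees that read() returns whole
    characters, so a multi-byte UTF-8 sequence can never be split between
    the end of one chunk and the beginning of the next.
    """
    with io.open(path, "r", encoding="utf-8") as reader:
        while True:
            chunk = reader.read(chunk_chars)  # at most chunk_chars whole characters
            if not chunk:
                break
            service.AddNextCSVChunk(session_id, chunk)
```

This is the same guarantee StreamReader gives in .NET: reading by character count rather than by byte count keeps chunk boundaries on whole characters automatically.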
The speed of the import is highly affected by the size of the chunks. The choice of chunk size for import depends only on the invoking system, so fine-tuning can be performed by the consumer system.
The steps of CSV export are very similar:
- Start a new CSV export session by calling BeginChunkedCSVExport.
- Export the chunks of the CSV document by calling GetNextExportChunk in turn.
- Call EndChunkedExport at the end to close the CSV export session.
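The export loop above can be sketched as follows. The service proxy and the end-of-stream convention (an empty chunk) are assumptions of this sketch, not the documented contract:

```python
def export_csv_in_chunks(service, session_id, writer):
    """Drain a chunked CSV export session into a text writer.

    The proxy object and the assumption that an empty chunk marks the
    end of the document are illustrative only.
    """
    while True:
        chunk = service.GetNextExportChunk(session_id)
        if not chunk:  # assumed end-of-document marker
            break
        writer.write(chunk)
    service.EndChunkedExport(session_id)
```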
The memoQ CSV format is supported for both CSV import and export. The import file must contain header information; otherwise its content cannot be imported. (CSV files exported using the TB services contain this header.)
MultiTerm export is also performed in multiple chunks, for the same reason. However, only the XML content file is large; the two other descriptor files, the XDL and XDT files, are returned as string parameters. The XML file is returned in chunks, like the CSV file, while both the XDL and XDT files are returned by the first call, BeginChunkedMultiTermExport, as out string parameters.
You can see the format of MultiTerm XML files exported through the Web Service API here.
The steps of MultiTerm export are:
- Start a new MultiTerm export session by calling BeginChunkedMultiTermExport.
- Write the two out string parameters returned by the previous function to XDL and XDT files.
- Export the chunks of the XML document by calling GetNextExportChunk in turn.
- Call EndChunkedExport at the end to close the export session.
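The steps above can be sketched as follows. In .NET the XDL and XDT contents arrive as out string parameters of BeginChunkedMultiTermExport; this Python sketch models them as extra return values, and the empty-chunk end marker is likewise an assumption:

```python
import io

def export_multiterm(service, tb_guid, xdl_path, xdt_path, xml_writer):
    # The first call opens the session and already returns the two small
    # descriptor files; only the XML content is streamed in chunks.
    session_id, xdl, xdt = service.BeginChunkedMultiTermExport(tb_guid)
    with io.open(xdl_path, "w", encoding="utf-8") as f:
        f.write(xdl)
    with io.open(xdt_path, "w", encoding="utf-8") as f:
        f.write(xdt)
    while True:
        chunk = service.GetNextExportChunk(session_id)
        if not chunk:  # assumed end-of-document marker
            break
        xml_writer.write(chunk)
    service.EndChunkedExport(session_id)
```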
To create a new QTerm TB and import a MultiTerm ZIP file into it, call the CreateQTermTBFromMultiterm method. First upload the MultiTerm ZIP file using the FileManagerService, then set the returned file identifier as MultitermZipFileId in the QTermTBImportSettings object passed to the CreateQTermTBFromMultiterm call. The returned Guid is the guid of the newly created TB. Please note that this operation runs synchronously, so for big MultiTerm files the timeout might need to be set to a higher value.
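A sketch of that flow, with the upload call and the settings object modelled loosely: the real QTermTBImportSettings is a typed object and the FileManagerService upload has its own protocol, both simplified to stand-ins here:

```python
def create_qterm_tb_from_multiterm(file_manager, tb_service, zip_path):
    # 1. Upload the MultiTerm ZIP and obtain its file identifier
    #    (file_manager.upload is a stand-in for the FileManagerService).
    file_id = file_manager.upload(zip_path)
    # 2. Reference the uploaded file in the import settings
    #    (a dict stands in for the QTermTBImportSettings object).
    settings = {"MultitermZipFileId": file_id}
    # 3. Create the TB; the call blocks until the import finishes and
    #    returns the Guid of the new TB.
    return tb_service.CreateQTermTBFromMultiterm(settings)
```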
TBX export itself is not performed in multiple chunks; only the download of the output is. The export uses the Tasks API and runs asynchronously. It creates the TBX file on the server, which you can then download via the File upload/download API.
You can see the format of TBX files exported through the Web Service API here.
The steps of TBX export are:
- Start a TBX export task by calling StartTBXExportTask.
- Get the status of the task with the Tasks API; you can find further details here.
- Once the export has finished, get the result of the task via the Tasks API.
- The result contains a property called ExportedFileId. Use this ID to download the exported file through the File upload/download API.
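Putting the steps together, with the Tasks API and File upload/download API represented by stand-in objects (the status value, the method names on the stand-ins, and the result shape are assumptions of this sketch):

```python
import time

def export_tbx(tb_service, tasks_api, files_api, tb_guid, out_path,
               poll_seconds=1.0):
    # Start the asynchronous export and poll until the task leaves the
    # pending state ("Pending" is an assumed status value).
    task_id = tb_service.StartTBXExportTask(tb_guid)
    while tasks_api.GetTaskStatus(task_id) == "Pending":
        time.sleep(poll_seconds)
    # The task result carries the ExportedFileId of the TBX file,
    # which the file API can download.
    result = tasks_api.GetTaskResult(task_id)
    files_api.download(result["ExportedFileId"], out_path)
```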