20 September 2006

Segmentation and Translation Memory

To get the broken sentences in the new files to find their equivalents (or even just fuzzy matches) in translation memory, we have three options:

  1. Modify the Perl scripts that extract the text from the header files into the HTML, so that the scripts no longer introduce the hard returns.
  2. Massage the HTML files themselves and replace the hard returns with spaces.
  3. Tune the segmentation rules in Trados so that it ignores the hard returns (but only the ones we want it to ignore) and doesn't consider a segment finished until it reaches a full stop/period.
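
For the record, option #2 might look something like this. This is a minimal sketch, not the actual scripts (which are Perl), and the heuristic it assumes is mine: treat a hard return as "broken" when the line it ends doesn't finish with sentence-final punctuation and the next line doesn't start a blank line or an HTML tag.

```python
import re

def join_broken_lines(html: str) -> str:
    """Replace mid-sentence hard returns with spaces (a sketch of option #2).

    Assumed heuristic (not from the original Perl scripts): a hard return
    is "broken" when the preceding line doesn't end a sentence or a tag,
    and the following line isn't blank and doesn't open an HTML tag.
    """
    pattern = re.compile(
        r"(?<![.!?:>])"   # previous line doesn't end with sentence punctuation or a tag
        r"\n"
        r"(?!\s*\n)"      # next line isn't blank (i.e., not a paragraph break)
        r"(?!\s*<)"       # next line doesn't open an HTML tag
    )
    return pattern.sub(" ", html)

broken = ("The driver initializes the\nbus controller before any\n"
          "device is probed.\n\n<p>Next paragraph.</p>")
print(join_broken_lines(broken))
```

Running this over the sample joins the three fragments into one sentence while leaving the paragraph break and the `<p>` tag alone. The same logic could be folded back into the Perl extraction scripts for option #1.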
To go as far upstream as possible, I suppose we should opt for #1 and fix the problem at its source. This seems optimal, unless we subsequently break more things than we repair. Options #2 and #3 are neat hacks and good opportunities to exercise fun tools, but they burn up time and still don't fix the problem upstream.

Also, I don't want the tail to wag the dog. The money spent translating false positives may be less than the time and money spent fixing the problem.
