02 October 2008

Wordcount Woes - Part 2

If you're working client-side, how many words have you paid for that translators didn't even need to touch?

I posted a couple of weeks ago on translatable words that vendors may miss in analyzing files. Alert reader arithmandar commented that slide decks can be even worse, if there is a lot of verbiage on the master slide that does not get easily captured (although Trados finds these words, according to him/her). Flash is another story altogether, and arithmandar's recommendation is that a Flash engineer should probably perform the analysis.

The other side of the coin is also unpleasant, but for the other party: Clients can hand off vast expanses of words that nobody will translate, artificially inflating the wordcount and estimate.
  • Code samples - If your documentation contains examples of code used in your product (e.g., in an API reference), there is no point in having that included in the wordcount, because nobody translates code.
  • XML/HTML/DITA/Doxygen tags - I hope your vendor is parsing these files so that the markup itself - tag names, attributes and especially href targets - is protected from translation and excluded from the count. Otherwise, not only will you get back pages that won't work worth a darn, but you'll also be charged for the words. (A rough sketch of this kind of filtering appears after this list.)
  • Legal language - Some companies want their license agreements, trademark/copyright statements, and other legal pages left untranslated. (Usually these are American companies.)
  • Directives - Certain directives and warnings apply to certain countries only. The documentation for computer monitors and medical devices often contains a few pages of such directives, which appear in the language of the country requiring them. There is usually set language for these directives, so free translation is not appreciated; have your colleagues in Compliance obtain the language for you, paste it in yourself, and point it out to your vendor.
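
If you want a rough sanity check of your own before the vendor's analysis comes back, this kind of filtering can be approximated with a short script. Here is a minimal sketch in Python (the skip list, and the assumption that code samples live in <code> and <pre> elements, are mine; a real analysis tool will count differently):

    import sys
    from html.parser import HTMLParser

    class TranslatableWordCounter(HTMLParser):
        """Counts words in visible text, skipping markup and code samples."""
        SKIP = {"code", "pre", "script", "style"}

        def __init__(self):
            super().__init__()
            self.skip_depth = 0   # nesting depth inside skipped elements
            self.words = 0

        def handle_starttag(self, tag, attrs):
            if tag in self.SKIP:
                self.skip_depth += 1

        def handle_endtag(self, tag):
            if tag in self.SKIP and self.skip_depth:
                self.skip_depth -= 1

        def handle_data(self, data):
            if self.skip_depth == 0:
                self.words += len(data.split())

    for path in sys.argv[1:]:
        counter = TranslatableWordCounter()
        with open(path, encoding="utf-8", errors="replace") as f:
            counter.feed(f.read())
        print(f"{path}\t{counter.words} words")
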
Mind you, there are costs associated with finding and removing all of these words: Do you want to spend time extracting the words? Do you want to hire somebody to find and extract them? Will your savings offset those costs?

If the words to be ignored add up to enough money - as they often do for a couple of our clients - pull them all into a text file and send them to your vendor with instructions to align them against themselves for all languages in the translation memory database. That way, when the vendor analyzes your files, the untranslatable words will fall out at 100% matches.
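
One way to do that self-alignment without an alignment tool is to generate a TMX file in which every target segment is identical to its source segment, then have the vendor import it for each language. A rough sketch in Python, assuming one segment per line in the text file (the language codes, file names and header values are placeholders):

    import sys
    from xml.sax.saxutils import escape

    def self_aligned_tmx(segments, srclang="en-US", tgtlang="ja-JP"):
        """Builds a TMX document whose target segments equal the source segments."""
        units = []
        for line in segments:
            seg = escape(line.strip())
            if seg:
                units.append(
                    "    <tu>\n"
                    f'      <tuv xml:lang="{srclang}"><seg>{seg}</seg></tuv>\n'
                    f'      <tuv xml:lang="{tgtlang}"><seg>{seg}</seg></tuv>\n'
                    "    </tu>"
                )
        return (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<tmx version="1.4">\n'
            '  <header creationtool="self-align" creationtoolversion="0.1" segtype="sentence"\n'
            f'          o-tmf="plaintext" adminlang="en-US" srclang="{srclang}" datatype="plaintext"/>\n'
            "  <body>\n" + "\n".join(units) + "\n  </body>\n</tmx>\n"
        )

    if __name__ == "__main__":
        # Usage: python self_align.py untranslatables.txt ja-JP > untranslatables_ja.tmx
        with open(sys.argv[1], encoding="utf-8") as f:
            sys.stdout.write(self_aligned_tmx(f, tgtlang=sys.argv[2]))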

Do you have ideas on how to handle such text?


29 May 2008

Localizing Robohelp Files - The Basics

We get a lot of search engine queries like "localize Robohelp file" and "translate help project." I'm pretty sure that most of them come from technical writers who have used Robohelp to create help projects (Compiled HTML Help Format), and who have suddenly received the assignment to get the projects localized.

The short answer
Find a localization company who can demonstrate to your satisfaction that it has done this before, and hand off the entire English version of your project - .hpj, .hhc, .hhk, .htm/.html and, of course, the .chm. Then go back to your regularly scheduled crisis. You should give the final version a quick smoke test before releasing it, for your own edification as well as to see whether anything is conspicuously missing or wrong.

The medium answer
Maybe you don't have the inclination or budget to have this done professionally, and you want to localize the CHM in house. Or perhaps you're the in-country partner of a company whose product needs localizing, and you've convinced yourself that it cannot be that much harder than translating a text file, so why not try it?

You're partially right: it's not impossible. In fact, it's even possible to decompile all of the HTML pages out of the binary CHM and start work from there. But your best bet is to obtain the entire help project mentioned above and then use translation memory software to simplify the process. Once you've finished translating, you'll need to compile the localized CHM using Robohelp or another help-authoring product (even hhc.exe).

The long answer
This is the medium answer with a bit more detail and several warnings.
  • There may be a way to translate inside the compiled help file, but I wouldn't trust it. Fundamentally, it's necessary to translate all of the HTML pages, then recompile the CHM; thus, it requires translation talent and some light engineering talent. If you're missing either one, stop and go back to The Short Answer.
  • hhc.exe is the Microsoft HTML Help compiler. It's part of the HTML Help Workshop, a free download from Microsoft (the hh.exe viewer, by contrast, ships with Windows). The Workshop is not an authoring environment like Robohelp, but it offers the engineering muscle to create a CHM once you have created all of the HTML content. If you have to localize a CHM without recourse to the original project, you can decompile all of the HTML pages out of it, either from within the Workshop or with hh.exe -decompile.
  • Robohelp combines an authoring environment for creating the HTML pages and the hooks to the HTML Help compiler. As such, it is the one-stop shopping solution for creating a CHM. However, it is known to introduce formatting and features that confuse the standard compiler, such that some Robohelp projects need to be compiled in Robohelp.
  • Robohelp was developed by BlueSky Software, which morphed into eHelp, which was acquired by Macromedia, which Adobe then bought. Along the way, decisions about Asian languages resulted in the need to compile Asian-language projects with the Asian-language version of Robohelp. This non-international approach was complicated by the fact that not every English version of Robohelp had a corresponding Asian-language version. Perhaps Adobe has dealt with this by now, but if you're still authoring in early versions, be prepared for your localization vendor to tell you that it needs to use an even earlier Asian-language version.
  • Because the hierarchical table of contents is not an HTML page, you may find that you need to give it a different encoding from the HTML pages for everything to show up properly in the localized CHM, especially in double-byte languages. (A sketch of one way to re-encode it follows this list.)
  • The main value in a CHM lies in the links from one page to another. In a complex project, these links can get quite long. Translators should stay away from them, and the best way to accomplish that is with translation memory software such as Déjà Vu, SDL Trados, across or Wordfast. These tools insulate tags and other untouchable elements from even novice translators.
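
On the encoding point above: if the compiler wants the table of contents and index in the target locale's ANSI codepage while the translated files come back as UTF-8, a quick conversion pass may be all you need. A rough sketch (the codepage - cp932 for Japanese - and the assumption that the .hhc/.hhk files arrive as UTF-8 are mine; check what your compiler and Robohelp version actually expect):

    import sys

    def reencode(src_path, dst_path, codepage="cp932"):
        """Rewrites a UTF-8 .hhc/.hhk file in the target locale's codepage."""
        with open(src_path, encoding="utf-8") as f:
            text = f.read()
        # errors="replace" flags characters the codepage cannot represent,
        # which usually means the wrong codepage was chosen for this language.
        with open(dst_path, "w", encoding=codepage, errors="replace") as f:
            f.write(text)

    if __name__ == "__main__":
        # Usage: python reencode_toc.py project_ja.hhc project_ja_sjis.hhc cp932
        reencode(sys.argv[1], sys.argv[2], sys.argv[3] if len(sys.argv) > 3 else "cp932")
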
We've marveled at how many search engine queries there are about localizing these projects, and we think that Robohelp and the other authoring environments have done a poor job explaining what's involved.

If you liked this article, have a look at "Localizing Robohelp Projects."


17 April 2008

Putting More "Sim" in your "SimShip"

How are you doing on your simultaneous shipment ("simship")? This is a common term in the industry that refers to releasing your domestic and localized products at the same time. Is your organization getting closer to simship? It shouldn't be getting further from it.

What measures have you put in place to reduce your time to market for localized versions? It's never easy to pry finalized content from writers and engineers in time to have it translated, but that's the dragon that most of us have to slay, so we focus on it a lot. How can we peel off content and get the translation process started sooner?

In the same way that eating lightly 5 times a day keeps you from getting really hungry and eating voraciously 3 times a day, we've found that handing off smaller bits of content even before they're finished keeps us from having to panic when somebody calls for a localized version.

We manage projects for a client who has the advantages of lots of sub-releases (3.1.2, 3.1.3, 3.1.5) between main releases (3.1, 3.2), and few overseas customers who want the sub-releases. (They also have the disadvantage of lacking a content management system that would make this much easier.) Even if your situation is not an exact match, you'll find that some principles apply anyway.
  • The biggest nut in the product is a 3500-page API reference guide in HTML. (Most products have a big, fat component that dwarfs all of the others.)
  • One month before each release, we assume that any new pages are about 95% final, so we hand them off for translation.
  • By the release date, we know whether we need to release a localized version of the entire product or not. If so, we proceed to hand off all of the rest of the product for translation, knowing that there will be some re-work of the new pages handed off a month before; if not, we hand off only the changed pages.
  • Thus, we almost always have pages from the API reference guide in translation. If we need them for a release, we have a lot of momentum already; if we don't need them for a release, we put the translations into our back pocket and wait until it's time for the next localized version.
This costs somewhat more money than normal because of the inevitable re-translation, not to mention the hours spent refreshing the localization kit and preparing files for translators. But that cost is acceptably low compared to the look of anguish on the international product manager's face when we have to say, "It will take about three months to finish the Korean version because of all of the changes since we last localized it."

We also need to assume that, sooner or later, there will be a request for the product in certain languages. If business conditions change and the new translations never see release, then the effort has been wasted for those languages, but that's a normal business risk.


13 July 2007

Where Translation Memory Goes to Die

Have you ever heard that you're better off not going into the kitchen at your favorite restaurant? You're likely to see a number of things you'd rather not associate with a place and a group of people you like.

The same may apply to your translation memory databases. Unfortunately, you don't have the luxury of ignoring them, because things could be dying in there and costing you money.

Let's start with this sentence:

Some interfaces use "redial function/redial context" semantics instead of using IRedial to specify both.

Any TM tool could store this string and its translation without problems. Suppose, though, that the sentence (segment, in TM terms) only looks contiguous when displayed in an HTML browser, which is a very forgiving viewer, and that the source is actually broken into three pieces:

1. Some interfaces use "redial function/redial context" semantics instead of using
2. to specify both.
3. [HTML tags] IRedial.htm [closing HTML tags] IRedial

The text comes from include files written by engineers for engineers, and no line is longer than 80 characters. The tags come from the well-intentioned Tech Pubs team, which struggles to introduce some organization, hyperlinking and search capability to the product. This is pretty bruising to TM, which relies on being able to fuzzily match new occurrences to old occurrences of similar text. When the full sentence comes through the TM tool, its correspondence to the three broken fragments in TM is sharply impaired, and you (or I, in this case) pay for it.

It gets worse. If an engineer pushes words from one line to the next between versions, or if the tags are modified, the match rates suffer in the same way.

I've huddled with engineers, Tech Pubs and the localization house on this matter several times, with little progress to show for it, but here's a new twist:

We've offshored one of these projects to a vendor in China. Their solution was to re-align ALL of the English-language HTML pages from the previous version to ALL of the translated HTML pages of the previous version, effectively re-creating TM. They report about 20% higher match rates after doing this. I think this is because they're embracing the broken, dead segments in TM and finding them in the source files for the new version.

This seems like a counterintuitive approach, but who can argue with the benefits?


11 May 2007

Localizing RoboHelp projects

Is it time for you to localize your RoboHelp projects? What's involved?

"RoboHelp project" is shorthand for "compiled help system." When this lives on a Windows client computer it is usually HTML Help (CHM) files. There are other variations like Web Help, which are also compiled HTML, but which do not run on the client.

The projects are a set of HTML files, authored in a tool such as--but not limited to--RoboHelp, then compiled into a binary form that allows for indexing, hierarchy and table of contents. Other platforms (Mac OS, Linux, Java) require a different compiler, but the theory is the same.

If you've done localization before, you'll find that RoboHelp projects are relatively easy, compared to a software project. RoboHelp (or whatever your authoring/compilation environment may be) creates a directory structure and file set that is easy to archive and hand off. It includes a main project file, table of contents file and index file. In fact, it's even possible in a pinch to simply hand off the compiled file, and have the localizers decompile it; the files they need will fall into place as a result of the decompilation.

Although you may think of the project as a single entity for localization purposes, each HTML page is a separate component. There may be large numbers of these pages that don't change from one version of your product to the next; nevertheless, you need to hand them off with the project, and you'll likely be charged for a certain amount of "touching" that the localizer's engineers will need to do. You may be able to save them some work and yourself some money by analyzing the project and determining which pages have no translatable changes, but by and large you should consider the costs for touching unchanged pages an unavoidable expense.

The biggest problem with these projects is in-country review. There's no easy way for an in-country reviewer to make changes or post comments in the compiled localized version. We've found that MS Excel is the worst way of doing this (except for all the others), so we've learned to live with it.

In theory, the translators are not mucking about with any tags, so the compiled localized version should work the same as the original. Yeah, right. All the links need to be checked--they do break sometimes--and the index and table of contents should be validated. And, don't forget to try a few searches to make sure they work; your customers surely will, and you want to spare them any unpleasant surprises.
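
If you want a first pass at the link check before handing the build back for fixes, something along these lines will catch the obviously broken internal links. A rough sketch (it assumes the localized HTML pages sit under one folder, and it ignores anchors, external URLs and case-sensitivity subtleties):

    import os
    import re
    import sys

    HREF = re.compile(r'href="([^"#]+)"', re.IGNORECASE)

    def check_links(root):
        """Reports relative href targets that do not exist on disk."""
        broken = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.lower().endswith((".htm", ".html")):
                    continue
                page = os.path.join(dirpath, name)
                with open(page, encoding="utf-8", errors="replace") as f:
                    text = f.read()
                for target in HREF.findall(text):
                    if target.startswith(("http:", "https:", "mailto:", "javascript:")):
                        continue
                    if not os.path.exists(os.path.join(dirpath, target)):
                        broken.append((page, target))
        return broken

    if __name__ == "__main__":
        for page, target in check_links(sys.argv[1]):
            print(f"{page}: missing {target}")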

Remember:
  • If you've included graphics in your help project, you'll need to obtain the original source files. These are not the GIFs or JPEGs; they are the application files from which the GIFs and JPEGs were generated. You'll need to hand off files from applications like Adobe Illustrator, Flash, or even PowerPoint, so that the translators can properly edit the text in them. Engineers often do quick mock-ups in Microsoft Word's WordArt that end up in the final product, and it takes a while to track them down.
  • Encoding can be thorny. Some compilers behave oddly if you try to impose the same encoding on both the HTML pages and the table of contents, especially in Japanese, in our experience.


02 March 2007

Translation non-savings, Part II

Again I ask: How far will you go to improve your localization process? If a big improvement didn't save any obvious money, would your organization go for it?

I selected a sample of 180 files. In one set, I left all of the HTML tags and line-wrapping as they were; in the other set, I pulled out raw, unwrapped text without HTML tags. My assumption was that the translation memory tools would find more matches in the raw, unwrapped text than in the formatted text.

I cannot yet figure out how or why - let alone what to do about it - but the matching rate dropped as a result of this experiment.

Match category                  Original HTML Formatting and Tags    Unwrapped, unformatted text
100% match and Repetitions      65%                                  51%
95-99% match                    9%                                   14%
No match                        9%                                   15%

This is, as they say in American comedy, a revoltin' development. It means that the anticipated savings in translation costs won't be there - though I suspect that the translators themselves will spend more time aligning and copy-pasting than they will translating - and that I'll have to demonstrate process improvement elsewhere. If I can find an elsewhere.


True, the localization vendor will probably spend less time in engineering and file preparation, but I think I need to demonstrate to my client an internal improvement - less work, less time, less annoyance - rather than an external one.


26 February 2007

Translation non-savings, Part I

How far will you go to improve your localization process?

Because of how localization is viewed in many companies, the best improvements are the ones that lower cost. Low cost helps keep localization inconspicuous, which is how most companies want it.

But if a big improvement didn't save any obvious money, would your organization go for it?

Elsewhere in this blog I relate the saga of the compiled help file with 3500+ HTML pages in it. These pages come from a series of Perl scripts that we run on the header files to extract all of the information about the product's API and wrap it up in a single, indexed, searchable CHM. In a variety of experiments, we've sought to move the focus of translation from the final HTML files to a point further upstream, at or near the header files themselves. If the raw content were translated, we believe, all downstream changes in the Perl scripts, which get revised quite often, would be imposed automatically on the localized CHM.

One of the biggest cost items - we have suspected - comes from changes in line wrapping and other HTML variations that confuse TM tools into thinking that matches are fuzzier than they really are. Segments falsely flagged as new look like untranslated words in the analysis, so the wordcounts rise, and not in our favor.

"If we work with raw text, before HTML formatting," our thinking goes, "the match rate will rise."

Not.

I'll describe my experiment shortly.


30 January 2007

Localization Train slowing

We're seeing the localization juggernaut lose some steam.

In the early years, this client localized its flagship software package for developers in China, Japan and Korea (CJK), then added Brazil. It took small, reference applications into as many as 10 languages (including Hebrew and Thai) as those markets showed promise. The budget was pretty fat, the localized products were freshened frequently, and the developers were happy to have software and doc in their own language.

I suppose it was to be expected that this would peter out with time, because markets change, business cases wax and wane, and some regions never return the investment.

The new stressor on localization was less easy to anticipate: bulk. Each generation of improvements to the product brings several hundred more pages of documentation. All of this new documentation is, of course, "free" in English, but somebody has to pull out a checkbook to deal with it in other languages, and that checkbook comes out more slowly and with more misgivings these days.

Engineering and Product Management furrow their brow nowadays when I walk in with cost estimates. I've adapted to this change in attitude with a few techniques:
  1. The Technical Reference is the fattest target and the source of most of the expansion. It lives in a compiled help file (CHM) that is no longer written by Tech Pubs, but generated by Perl scripts from header files written by the engineers. Our modus localizandi has been to hand off the finished help project, now comprising 3700 HTML files, and have the HTML translated. In an effort to lower cost, I'm attempting a proof-of-concept to localize the header files themselves, then tune the scripts to convert them into localized HTML. This should lower our localization engineering costs considerably.
  2. I agitate for interim localization updates, peeling off documentation deltas every few weeks and handing them off for translation, even if there are no plans to release them yet. This reduces the sticker shock and time-to-market delay that comes of getting an estimate on a release only when necessary, which may be a 10- to 18-month interval. Product Management and Engineering, who only think about localization when it's absolutely unavoidable, find the tsunami of untranslated text depressing.
  3. Although it's not a very clean way of doing things, I screen from the localization handoff those items that I know have little to be translated. Sometimes I go to the level of resource files, but more often I take documents to which only a few minor changes have been made from one English version to the next, hand off the changed text, then place the translations myself. This is not for the faint of heart, nor for those who don't really know the languages involved, but it can save some money.
  4. I try to keep global plates spinning, in the hope that more people will consider the global dimension of what we do, and remember that localization is the necessary step toward making your product acceptable to the people whose use of it will make you money - if you make it easy for them.
  5. I never impart bad news on Friday.


07 October 2006

Localization and the Perl Script

After some cajoling, I've prevailed on our tech-writer-who-doesn't-do-any-writing to modify his Perl scripts. The changes will remove the thousands of CRLF (hard returns) in the 3700 extracted HTML files, and result in better Trados matching between the new files and translation memory.

Then, of course, it will take a few hours' perusal to see what breaks as a result of that fix.

It seems to be an unsung inconvenience of localization that
a
sentence
put
together
with
these
words
and
looking
like
this

separated by hard returns in the raw HTML file (which you can see by viewing source in a browser) becomes

a sentence put together with these words and looking like this

when viewed in a browser. The translation memory tools, of course, see the hard returns and try in vain to match across them; the returns themselves cause a fair bit of head-scratching for anyone who views the files only through a browser, where they are invisible.
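
For what it's worth, the unwrapping doesn't have to happen inside the Perl scripts; it can also be done to the extracted HTML after the fact. A rough sketch of the idea in Python (the real pipeline here is Perl; this version protects <pre> blocks and assumes that collapsing every other line break is harmless):

    import re
    import sys

    def unwrap(html):
        """Replaces hard returns inside running text with single spaces,
        so TM tools see whole sentences instead of 80-character fragments."""
        # Protect <pre>...</pre> blocks, where line breaks are meaningful.
        pre_blocks = []
        def stash(match):
            pre_blocks.append(match.group(0))
            return f"\x00{len(pre_blocks) - 1}\x00"
        html = re.sub(r"<pre\b.*?</pre>", stash, html, flags=re.DOTALL | re.IGNORECASE)
        # Collapse line breaks (and surrounding spaces) into a single space.
        html = re.sub(r"[ \t]*\r?\n[ \t]*", " ", html)
        # Restore the protected blocks.
        return re.sub(r"\x00(\d+)\x00", lambda m: pre_blocks[int(m.group(1))], html)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path, encoding="utf-8", errors="replace") as f:
                text = f.read()
            with open(path, "w", encoding="utf-8") as f:
                f.write(unwrap(text))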


30 September 2006

The Localization Consultant Amid His Buckets

After a few hours hunched over Beyond Compare, I've sorted the deltas between version 3.9 and version 4 into several buckets:
  1. New Content, based on filenames appearing for the first time in this version - 718 files
  2. Content Unchanged, except for the datestamp at the bottom of the page - 727 files
  3. Content Changed, but with changes that do not require translation (HTML tags, formatting) - 1517 files
  4. Other, including content with translatable changes and anything else - 319 files
My hope is that the vendor can hand off to the Japanese translators only those pages in which there is real translation work, then internally take care of #2 and #3 with search-and-replace and other engineering techniques to bring the 3.9 pages into parity with the 4.0 pages. For that matter, I could probably do the engineering myself, except that: 1) it's boring work; and 2) the vendor needs to update translation memory with the results.
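
If another round of this is in the cards, the first pass at the bucketing can be scripted rather than done entirely by eye. A rough sketch (the datestamp pattern, the flat file layout and the crude tag-stripping are assumptions; anything the script can't call identical still lands in the "Other" bucket for human eyes):

    import os
    import re
    import sys

    DATESTAMP = re.compile(r"Last updated:[^<]*", re.IGNORECASE)  # assumed footer pattern
    TAGS = re.compile(r"<[^>]+>")

    def normalized(path, keep_tags):
        """Reads a page, drops the datestamp, and optionally strips markup."""
        with open(path, encoding="utf-8", errors="replace") as f:
            text = DATESTAMP.sub("", f.read())
        return text if keep_tags else " ".join(TAGS.sub(" ", text).split())

    def bucket(old_root, new_root, name):
        old_page, new_page = os.path.join(old_root, name), os.path.join(new_root, name)
        if not os.path.exists(old_page):
            return "1. New content"
        if normalized(old_page, keep_tags=True) == normalized(new_page, keep_tags=True):
            return "2. Unchanged except datestamp"
        if normalized(old_page, keep_tags=False) == normalized(new_page, keep_tags=False):
            return "3. Changed, but nothing translatable"
        return "4. Other"

    if __name__ == "__main__":
        # Usage: python buckets.py ./v3.9_html ./v4.0_html
        old_root, new_root = sys.argv[1], sys.argv[2]
        for name in sorted(os.listdir(new_root)):
            if name.lower().endswith((".htm", ".html")):
                print(f"{bucket(old_root, new_root, name)}\t{name}")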

We'll see how this goes. It doesn't help that the original English files have a lot of formatting errors in them, and that errors in the Perl scripts wipe out the content on several dozen pages and toss them into the CHM blank.


20 September 2006

Segmentation and Translation Memory

To get the broken sentences in the new files to find their equivalents (or even just fuzzy matches) in translation memory we have three options:

  1. Modify the Perl scripts that extract the text from the header files into the HTML, so that the scripts no longer introduce the hard returns.
  2. Massage the HTML files themselves and replace the hard returns with spaces.
  3. Tune the segmentation rules in Trados such that it ignores the hard returns (but only the ones we want it to ignore) and doesn't consider the segment finished until it gets to a hard stop/period.
To go as far upstream as possible, I suppose we should opt for #1 and fix the problem at its source. This seems optimal, unless we subsequently break more things than we repair. Options #2 and #3 are neat hacks and good opportunities to exercise fun tools, but they burn up time and still don't fix the problem upstream.

Also, I don't want the tail to wag the dog. The money spent translating segments that TM should have matched may well be less than the time and money it would take to fix the problem.


01 August 2006

So the API Ref weighs in at 3280 HTML pages now, about 750 more than in the last release.

The trick will be in figuring out which of these zillions of pages have substantive changes (i.e., new translatable text, changed translatable text) and which have changed due to non-translatable issues (i.e., changes to the HTML code inside the tags). Translation memory tools are meant to ignore the latter, but I can't leave good translation inside outdated HTML; something is bound to break, or at least look bad, if we shuffle multiple generations of HTML code and tag conventions together and compile it.


I don't think the TM tools are going to rescue me from this. I should figure out a way to translate the source header files instead of the downstream HTML files.
