Member "recode-3.7.12/doc/recode.texi" (17 Feb 2022, 211303 Bytes) of package /linux/misc/recode-3.7.12.tar.gz:


The Recode reference manual



This recoding library converts files between various coded character sets and surface encodings. When this cannot be achieved exactly, it may get rid of the offending characters or fall back on approximations. The library recognises or produces more than 300 different character sets and is able to convert files between almost any pair. Most RFC 1345 character sets, and all character sets from a pre-installed iconv library, are supported. The recode program is a handy front-end to the library.

This manual documents Recode 3.7.12.


1 Quick Tutorial

So, really, you are just in a hurry to use Recode, and do not feel like studying this manual? Even reading this paragraph slows you down? We might have a problem, as you will have to do some guesswork, and might not become very proficient unless you have a very solid intuition…

Let me use here, as a quick tutorial, an actual reply of mine to a Recode user, who writes:

My situation is this—I occasionally get email with special characters in it. Sometimes this mail is from a user using IBM software and sometimes it is a user using Mac software. I myself am on a SPARC Solaris machine.

Your situation is similar to mine, except that I often receive email needing recoding, that is, much more than occasionally! The usual recodings I do are Mac to Latin-1, IBM page codes to Latin-1, Easy-French to Latin-1, remove Quoted-Printable, remove Base64. These are so frequent that I made myself a few two-keystroke Emacs commands to filter the Emacs region. This is very convenient for me. I also resort to many other email conversions, yet more rarely than the frequent cases above.

It seems like this should be doable using Recode. However, when I try something like ‘recode mac macfile.txt’ I get nothing out—no error, no output, nothing.

Note: For the following discussion to be true, you should have something like ‘export LANG=fr_FR.ISO-8859-1’ in your environment, the important bit here being the specification of a preferred charset.

Presuming you are using some recent version of Recode, the command:

recode mac macfile.txt

is a request for recoding macfile.txt over itself, overwriting the original, from the usual Macintosh character code and Macintosh end of lines, to Latin-1 and Unix end of lines. This is overwrite mode. If you want to use Recode as a filter, which is probably what you need, rather do:

recode mac

and give your Macintosh file as standard input; you’ll get the Latin-1 file on standard output. The above command is an abbreviation for any of:

recode mac..
recode mac..l1
recode mac..Latin-1
recode mac/CR..Latin-1/
recode Macintosh..ISO_8859-1
recode Macintosh/CR..ISO_8859-1/

That is, a CR surface, encoding newlines with ASCII CR, is first to be removed (this is a default surface for ‘mac’), then the Macintosh charset is converted to Latin-1 and no surface is added to the result (there is no default surface for ‘l1’). If you want ‘mac’ code converted, but you know that newlines are already coded the Unix way, just do:

recode mac/

the slash then overriding the default surface with empty, that is, none. Here are other easy recipes:

recode pc          to filter IBM-PC code and CR-LF (default) to Latin-1
recode pc/         to filter IBM-PC code to Latin-1
recode 850         to filter code page 850 and CR-LF (default) to Latin-1
recode 850/        to filter code page 850 to Latin-1
recode /qp         to remove quoted printable

The last one is indeed equivalent to any of:

recode /qp..
recode l1/qp..l1/
recode ISO_8859-1/Quoted-Printable..ISO_8859-1/
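
Outside Recode, the same Quoted-Printable removal can be sketched with Python’s standard quopri module; the Latin-1 assumption mirrors the default charset used in the examples above, and this is only an illustration of the idea, not Recode itself:

```python
import quopri

# Remove the Quoted-Printable "surface", then interpret the resulting
# bytes as Latin-1, mirroring what `recode /qp..` does with a Latin-1
# default charset.
raw = quopri.decodestring(b"caf=E9 au lait")
text = raw.decode("latin-1")
print(text)  # -> café au lait
```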

Here are some reverse recipes:

recode ..mac       to filter Latin-1 to Macintosh code and CR (default)
recode ..mac/      to filter Latin-1 to Macintosh code
recode ..pc        to filter Latin-1 to IBM-PC code and CR-LF (default)
recode ..pc/       to filter Latin-1 to IBM-PC code
recode ..850       to filter Latin-1 to code page 850 and CR-LF (default)
recode ..850/      to filter Latin-1 to code page 850
recode ../qp       to force quoted printable

In all the above calls, replace ‘recode’ by ‘recode -f’ if you want to proceed despite recoding errors. If you do not use ‘-f’ and there is an error, the recoding output will be interrupted after the first error in filter mode, or the file will not be replaced by a recoded copy in overwrite mode.
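
For readers curious about what the tutorial’s ‘recode mac’ amounts to, here is a rough Python equivalent built from the standard codecs. mac_roman and latin-1 are Python codec names, not Recode’s, and errors="replace" only loosely stands in for ‘-f’; this is an illustration of the conversion, not how Recode is implemented:

```python
def mac_to_latin1(data: bytes) -> bytes:
    """Rough sketch of `recode -f mac`: remove the CR end-of-line
    surface, then convert the Macintosh charset to Latin-1."""
    text = data.decode("mac_roman")      # Macintosh charset
    text = text.replace("\r", "\n")      # CR surface -> Unix newlines
    # errors="replace" substitutes unmappable characters, loosely like -f
    return text.encode("latin-1", errors="replace")

print(mac_to_latin1(b"caf\x8e\r"))  # 0x8E is e with acute in Mac Roman
```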

You may use ‘recode -l’ to get a list of available charsets and surfaces, and ‘recode --help’ to get a quick summary of options. That output is meant for those who have already read this manual, so let me dare a suggestion: why not find a few more minutes in your schedule to peek further down, right into the following chapters!


2 Terminology and purpose

A few terms are used over and over in this manual, so our wise reader should learn their meaning right away. Both ISO (the International Organization for Standardisation) and the IETF (Internet Engineering Task Force) have their own terminology; this document does not try to stick to either one in a strict way, while it does not want to throw more confusion into the field. On the other hand, it would not be efficient to use paraphrases all the time, so Recode coins a few short words, which are explained below.

A charset, in the context of Recode, is a particular association between computer codes on one side, and a repertoire of intended characters on the other side. Codes are usually taken from a set of consecutive small integers, starting at 0. Some characters have a graphical appearance (glyph) or displayable effect, others have special uses like, for example, to control devices or to interact with neighbouring codes to specify them more precisely. So, a charset is roughly one of those tables, giving a meaning to each of the codes from the set of allowable values. MIME also uses the term charset with approximately the same meaning. It does not exactly correspond to what ISO calls a coded character set, that is, a set of characters with an encoding for them. A coded character set does not necessarily use all available code positions, while a MIME charset usually tries to specify them all. A MIME charset might be the union of a few disjoint coded character sets.

A surface is a term used in Recode only, and is short for surface transformation of a charset stream. This is any kind of mapping, usually reversible, which associates physical bits in some medium with a stream of characters taken from one or more charsets (usually one). A surface is a kind of varnish added over a charset so it fits in actual bits and bytes. How end of lines are exactly encoded is not really pertinent to the charset, and so, there is a surface for end of lines. Base64 is also a surface, as we may encode any charset in it. Other examples would be DES enciphering, or gzip compression (even if Recode does not offer them currently): these are ways to give a real life to theoretical charsets. The trivial surface consists of putting characters into fixed-width little chunks of bits, usually eight such bits per character. But things are not always that simple.
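
As a concrete illustration of a surface being applied over, and later removed from, a charset stream, here is a Python sketch using the standard base64 module; the charset underneath is Latin-1 purely by assumption:

```python
import base64

latin1_stream = "déjà vu".encode("latin-1")   # the underlying charset stream
surfaced = base64.b64encode(latin1_stream)    # apply the Base64 surface
restored = base64.b64decode(surfaced)         # remove the surface again

# A surface is usually reversible: removing it recovers the charset
# stream intact, whatever charset the bytes happen to represent.
assert restored == latin1_stream
print(surfaced)
```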

This Recode library, and the program by that name, have the purpose of converting files between various charsets and surfaces. When this cannot be done in exact ways, as is often the case, the program may get rid of the offending characters or fall back on approximations. This library recognises or produces around 175 such charsets under 500 names, and handles a dozen surfaces. Since it can convert each charset to almost any other one, many thousands of different conversions are possible.

The Recode program and library do not usually know how to split and sort out textual and non-textual information which may be mixed in a single input file. For example, there is no surface which currently addresses the problem of how lines are blocked into physical records, when the blocking information is added as binary markers or counters within files. So, Recode should be given textual streams which are rather pure.

This tool pays special attention to superimposition of diacritics for some French representations. This orientation is mostly historical; it does not impair the usefulness, generality or extensibility of the program. ‘recode’ is both a French and English word. For those who pay attention to those things, the proper pronunciation is French (that is, ‘racud’, with ‘a’ like in ‘above’, and ‘u’ like in ‘cut’).

The Recode program and library have been written by François Pinard. With time, they came to reuse work from other contributors, notably Keld Simonsen and Bruno Haible.


2.1 Overview of charsets

Recoding is currently possible between many charsets, the bulk of which are described by RFC 1345 tables or available from a pre-installed external iconv library. See Tabular, and see iconv. The Recode library also handles some charsets in some specialised ways. These are:

The introduction of RFC 1345 in Recode has brought with it a few charsets having the functionality of older ones, yet being different in subtle ways. The effects have not been fully investigated yet, so for now, clashes are avoided and the old and new charsets are kept well separate.

Conversion is possible between almost any pair of charsets. Here is a list of the exceptions. One may not recode from the flat, count-characters or dump-with-names charsets, nor from or to the data or :iconv: charsets. Also, if we except the data pseudo-charset, charsets and surfaces live in disjoint recoding spaces, one cannot really transform a surface into a charset or vice-versa, as surfaces are only meant to be applied over charsets, or removed from them.


2.2 Overview of surfaces

For various practical considerations, it sometimes happens that the codes making up a text, written in a particular charset, cannot simply be put out in a file one after another without creating problems or breaking other things. Sometimes, 8-bit codes cannot be written on a 7-bit medium, variable-length codes need some kind of envelope, newlines require special treatment, etc. We sometimes have to apply surfaces to a stream of codes, these surfaces being tricks used to fit the charset into those practical constraints. Moreover, similar surfaces or tricks may be useful for many unrelated charsets, and many surfaces can be used at once over a single charset.

So, Recode has machinery to describe a combination of a charset with surfaces used over it in a file. We use the expression pure charset to refer to a charset free of any surface, that is, the conceptual association between integer codes and character intents.

It is not always clear whether some transformation will yield a charset or a surface, especially for those transformations which are only meaningful over a single charset. The Recode library is not overly picky about identifying surfaces as such: when it is practical to consider a specialised surface as if it were a charset, this is preferred, and done.


2.3 Contributions and bug reports

Even though I am the Recode author and current maintainer, I am no specialist in charset standards. I only made Recode over the years to solve my own needs, but felt it was applicable to the needs of others. Some FSF people liked the program structure and suggested making it more widely available. I often rely on Recode users’ suggestions to decide what is best to do next.

Properly protecting Recode against possible copyright fights is a pain for me and for contributors, but we cannot avoid addressing the issue in the long run. Besides, the Free Software Foundation, which mandates the GNU project, is very sensitive to this matter. GNU standards suggest that we stay cautious before looking at copyrighted code. The safest and simplest way for me is to gather ideas and reprogram them anew, even if this might slow me down considerably. For contributions going beyond a few lines of code here and there, the FSF definitely requires employer disclaimers and copyright assignments in writing.

When you contribute something to Recode, please explain what it is about. Do not take for granted that I know those charsets which are familiar to you. Once again, I’m no expert, and you have to help me. Your explanations could well find their way into this documentation, too. Also, for contributing new charsets or new surfaces, as much as possible, please provide good, solid, verifiable references for the tables you used.

Many users have contributed to Recode already; I am grateful to them for their interest and involvement. Some suggestions can be integrated quickly while others have to be delayed; I have to draw a line somewhere when the time comes to make a new release, about what goes in it and what goes in the next.

Please report suggestions, documentation errors and bugs at https://github.com/rrthomas/recode. Do not be afraid to report details, because this program is the mere aggregation of hundreds of details.


3 How to use this program

With the synopsis of the recode call, we stress the difference between using this program as a file filter, or recoding many files at once. The first parameter of any call states the recoding request, and this deserves a section on its own. Options are then presented, but somewhat grouped according to the related functionalities they control.


3.1 Synopsis of recode call

The general format of the program call is one of:

recode [option]… [charset | request [file]… ]

Some calls are used only to obtain lists produced by Recode itself, without actually recoding any file. They are recognised through the usage of listing options, and these options decide what meaning should be given to an optional charset parameter. See Listings.

In other calls, the first parameter (request) always explains which transformations are expected on the files. There are many variations to the aspect of this parameter. We will discuss more complex situations later (see Requests), but for many simple cases, this parameter merely looks like this:

before..after
where before and after each gives the name of a charset. Each file will be read assuming it is coded with charset before; it will be recoded over itself so as to use the charset after. If there is no file on the recode command line, the program instead acts as a Unix filter and transforms standard input into standard output.

The capability of recoding many files at once is very convenient. For example, one could easily prepare a distribution from Latin-1 to MSDOS, this way:

mkdir package
cp -p Makefile *.[ch] package
recode Latin-1..MSDOS package/*
zoo ah package.zoo package/*
rm -rf package

(In this example, the non-mandatory ‘-p’ option to cp is for preserving timestamps, and the zoo program is an archiver from Rahul Dhesi which once was quite popular.)

The filter operation is especially useful when the input files should not be altered. Let us take an example to illustrate this point. Suppose that someone has a file named datum.txt, which is almost a TeX file, except that diacriticised characters are written using Latin-1. To complete the recoding of the diacriticised characters only and produce a file datum.tex, without destroying the original, one could do:

cp -p datum.txt datum.tex
recode -d l1..tex datum.tex

However, using recode as a filter will achieve the same goal more neatly:

recode -d l1..tex <datum.txt >datum.tex

This example also shows that l1 could be used instead of Latin-1; charset names often have such aliases.

Recode has three modes for deciding when to set the exit status to non-zero:


3.2 The request parameter

In the case where the request is merely written as before..after, then before and after specify the start charset and the goal charset for the recoding.

For Recode, charset names may contain any character, besides a comma, a forward slash, or two periods in a row. But in practice, charset names are currently limited to alphabetic letters (upper or lower case), digits, hyphens, underlines, periods, colons or round parentheses.
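
The naming rule above can be sketched as a small Python check. is_valid_charset_name is a hypothetical helper written for this manual, not part of any Recode API:

```python
import re

def is_valid_charset_name(name: str) -> bool:
    """Hypothetical checker for the rule above: no comma, no slash,
    no two periods in a row; and in practice only letters, digits,
    hyphens, underlines, periods, colons and round parentheses."""
    if "," in name or "/" in name or ".." in name:
        return False
    return re.fullmatch(r"[A-Za-z0-9._:()-]+", name) is not None

print(is_valid_charset_name("ISO_8859-1"))   # True
print(is_valid_charset_name("mac..latin1"))  # False: '..' separates charsets
```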

The complete syntax for a valid request allows for unusual things, which might be surprising at first. (Do not pay too much attention to these facilities on first reading.) For example, a request may also contain intermediate charsets, as in the following example:

before..interim1..interim2..after
meaning that Recode should internally produce the interim1 charset from the start charset, then work out of this interim1 charset to internally produce interim2, and from there towards the goal charset. In fact, Recode internally combines recipes and automatically uses interim charsets, when there is no direct recipe for transforming before into after. But there might be many ways to do it. When many routes are possible, the above chaining syntax may be used to more precisely force the program towards a particular route, which it might not have naturally selected otherwise. On the other hand, because Recode tries to choose good routes, chaining is only needed to achieve some rare, unusual effects.

Moreover, many such requests (sub-requests, more precisely) may be separated with commas (but no spaces at all), indicating a sequence of recodings, where the output of one serves as the input of the following one. For example, the two following requests are equivalent:

before..interim1..interim2..after
before..interim1,interim1..interim2,interim2..after
In this example, the charset input for any recoding sub-request is identical to the charset output by the preceding sub-request. But it does not have to be so in the general case. One might wonder what it would mean to declare the charset input of a recoding sub-request to be of a different nature than the charset output by the preceding sub-request, when recodings are chained in this way. Such a strange usage might have a meaning and be useful for the Recode expert, but it is quite uncommon in practice.

More useful is the distinction between the concept of charsets and the concept of surfaces. An encoded charset is represented by:

pure-charset/surface1/surface2
using slashes to introduce surfaces, if any. The order of application of surfaces is usually important; they cannot be freely commuted. In the given example, surface1 is first applied over the pure-charset, then surface2 is applied over the result. Given this request:

before/surface1/surface2..after/surface3
Recode will understand that the input files should have surface2 removed first (because it was applied last), then surface1 should be removed. The next step will be to translate the codes from charset before to charset after, prior to applying surface3 over the result.

Some charsets have one or more implied surfaces. In this case, the implied surfaces are automatically handled merely by naming the charset, without any explicit surface to qualify it. Let’s take an example to illustrate this feature. The request ‘pc..l1’ will indeed decode MS-DOS end of lines prior to converting IBM-PC codes to Latin-1, because ‘pc’ is the name of a charset which has CR-LF for its usual surface. The request ‘pc/..l1’ will not decode end of lines, since the slash introduces surfaces, and even if the surface list is empty, it effectively defeats the automatic removal of surfaces for this charset. So, empty surfaces are useful, indeed!
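
To make the effect of the slash concrete, here is a Python sketch of the difference between ‘pc..l1’ and ‘pc/..l1’. Python’s cp437 codec merely stands in for Recode’s ‘pc’ charset; this only illustrates the idea of an implied surface, it is not how Recode is implemented:

```python
def pc_to_l1(data: bytes, strip_crlf: bool = True) -> bytes:
    """Sketch of `pc..l1` (strip_crlf=True) versus `pc/..l1`
    (strip_crlf=False): the implied CR-LF surface is removed only
    when the charset is named without a trailing slash."""
    text = data.decode("cp437")           # stands in for the 'pc' charset
    if strip_crlf:                        # the implied CR-LF surface of 'pc'
        text = text.replace("\r\n", "\n")
    return text.encode("latin-1", errors="replace")

print(pc_to_l1(b"hello\r\n"))         # surface removed: b'hello\n'
print(pc_to_l1(b"hello\r\n", False))  # surface kept:    b'hello\r\n'
```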

Both charsets and surfaces may have predefined alternate names, or aliases. However, and this is rather important to understand, implied surfaces are attached to individual aliases rather than to genuine charsets. Consequently, the official charset name and each of its aliases may have its own, possibly different, set of implied surfaces.

Charset names, surface names, or their aliases may always be abbreviated to any unambiguous prefix. Internally in Recode, disambiguating tables are kept separate for charset names and surface names.

While recognising a charset name or a surface name (or aliases thereof), Recode ignores all characters besides letters and digits, so, for example, the hyphens and underlines that are part of an official charset name may safely be omitted. There is also no distinction between upper and lower case for charset or surface names.
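
The matching rule can be sketched in a couple of lines of Python; normalize is a hypothetical helper, not a Recode function:

```python
def normalize(name: str) -> str:
    """Keep only letters and digits, lower-cased, as Recode does when
    matching charset and surface names (a hypothetical helper)."""
    return "".join(c for c in name if c.isalnum()).lower()

# All of these spellings match the same charset name:
assert normalize("ISO_8859-1") == normalize("iso88591") == normalize("ISO-8859-1")
```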

One of the before or after keywords may be omitted. If the double dot separator is omitted too, then the charset is interpreted as the before charset.

When a charset name is omitted or left empty, the value of the DEFAULT_CHARSET variable in the environment is used instead. If this variable is not defined, the Recode library uses the current locale’s encoding. On POSIX systems, this depends on the first non-empty value among the environment variables LC_ALL, LC_CTYPE and LANG, and can be determined through the command ‘locale charmap’. If the current locale’s encoding cannot be resolved, Recode presumes ASCII.
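
The lookup order just described can be sketched in Python. default_charset is a hypothetical helper, and the locale parsing is deliberately simplified: real locale strings may carry modifiers such as ‘@euro’, which this sketch ignores:

```python
def default_charset(env: dict) -> str:
    """Sketch of the lookup order described above: DEFAULT_CHARSET
    first, then the encoding part of LC_ALL, LC_CTYPE or LANG,
    else ASCII as a last resort."""
    if env.get("DEFAULT_CHARSET"):
        return env["DEFAULT_CHARSET"]
    for var in ("LC_ALL", "LC_CTYPE", "LANG"):
        value = env.get(var)
        if value and "." in value:
            return value.split(".", 1)[1]   # fr_FR.ISO-8859-1 -> ISO-8859-1
    return "ASCII"

print(default_charset({"LANG": "fr_FR.ISO-8859-1"}))  # -> ISO-8859-1
```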

If the charset name is omitted but followed by surfaces, the surfaces then qualify the usual or default charset. For example, the request ‘../x’ is sufficient for applying a hexadecimal surface to the input text.

The allowable values for before or after charsets, and various surfaces, are described in the remainder of this document.


3.3 Asking for various lists

Many options control listing output generated by Recode itself; they are not meant to accompany actual file recodings. These options are:


--version

The program merely prints its version numbers on standard output, and exits without doing anything else.


--help

The program merely prints a page of help on standard output, and exits without doing any recoding.


-C, --copyright

Given this option, all other parameters and options are ignored. The program briefly prints the copyright and copying conditions. See the file COPYING in the distribution for the full statement of the Copyright and copying conditions.


-h[name], --header[=[language/]name]

Instead of recoding files, Recode writes a language source file on standard output and exits. This source is meant to be included in a regular program written in the same programming language: its purpose is to declare and initialise an array, named name, which represents the requested recoding. The only acceptable values for language are ‘c’ or ‘perl’, and may be abbreviated. If language is not specified, ‘c’ is assumed. If name is not specified, it defaults to ‘before_after’. Strings before and after are cleaned before being used, according to the syntax of language.

Even if Recode tries its best, this option does not always succeed in producing the requested source table; it then prints ‘Recoding is too complex for a mere table’. It will succeed, however, provided the recoding can be internally represented by only one step after the optimisation phase, and if this merged step conveys a one-to-one or a one-to-many explicit table. To increase the probability that this happens, iconv initialisation is currently inhibited whenever this option is used. Also, when attempting to produce source tables, Recode relaxes its checking a tiny bit: it ignores the algorithmic part of some tabular recodings, and it avoids the processing of implied surfaces. But this is all fairly technical. Better try and see!

Most tables are produced using decimal numbers to refer to character values. Yet, users who know all Recode tricks and stunts could indeed force octal or hexadecimal output for the table contents. For example:

recode ibm297/test8..cp1252/x < /dev/null

produces a sequence of hexadecimal values which represent a conversion table from IBM297 to CP1252.
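
To give an idea of what such a one-to-one conversion table contains, here is a Python sketch that builds a 256-entry table between two 8-bit charsets using Python’s own codecs. mac_roman and latin-1 are Python codec names, and the array layout is only an analogy to Recode’s generated source tables, not its actual output format:

```python
def build_table(src: str, dst: str):
    """Build a 256-entry table mapping 8-bit charset `src` to 8-bit
    charset `dst`, with None for unmappable codes.  An analogy to
    Recode's generated tables, built from Python's own codecs."""
    table = []
    for code in range(256):
        try:
            char = bytes([code]).decode(src)
            table.append(char.encode(dst)[0])
        except (UnicodeDecodeError, UnicodeEncodeError):
            table.append(None)
    return table

table = build_table("mac_roman", "latin-1")
print(table[142])  # Mac Roman 0x8E (e with acute) maps to Latin-1 233
```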

Beware that other options might affect the produced source tables; these are: ‘-d’, ‘-g’ and, particularly, ‘-s’.

-k pairs

This particular option is meant to help identify an unknown charset, using as hints some already identified characters of the charset. Some examples will help introduce the idea.

Let’s presume here that Recode is run in a UTF-8 locale, and that DEFAULT_CHARSET is unset in the environment. Suppose you have guessed that code 130 (decimal) of the unknown charset represents a lower case ‘e’ with an acute accent. That is to say that this code should map to code 233 (decimal) in the usual charset. By executing:

recode -k 130:233

you should obtain a listing similar to:

CWI cp-hu CWI-2
IBM437/CR-LF 437/CR-LF CP437/CR-LF
IBM850/CR-LF 850/CR-LF CP850/CR-LF
IBM851/CR-LF 851/CR-LF CP851/CR-LF
IBM852/CR-LF 852/CR-LF CP852/CR-LF pcl2 pclatin2
IBM857/CR-LF 857/CR-LF CP857/CR-LF
IBM860/CR-LF 860/CR-LF CP860/CR-LF
IBM861/CR-LF 861/CR-LF CP861/CR-LF cp-is
IBM863/CR-LF 863/CR-LF CP863/CR-LF
IBM865/CR-LF 865/CR-LF CP865/CR-LF

You can give more than one clue at once, to restrict the list further. Suppose you have also guessed that code 211 of the unknown charset represents an upper case ‘E’ with diaeresis, that is, code 203 in the usual charset. By requesting:

recode -k 130:233,211:203

you should obtain:

IBM850/CR-LF 850/CR-LF CP850/CR-LF
IBM852/CR-LF 852/CR-LF CP852/CR-LF pcl2 pclatin2
IBM857/CR-LF 857/CR-LF CP857/CR-LF

The usual charset may be overridden by specifying one non-option argument. For example, to request the list of charsets for which code 130 maps to code 142 for the Macintosh, you may ask:

recode -k 130:142 mac

and get:

CWI cp-hu CWI-2
IBM437/CR-LF 437/CR-LF CP437/CR-LF
IBM850/CR-LF 850/CR-LF CP850/CR-LF
IBM851/CR-LF 851/CR-LF CP851/CR-LF
IBM852/CR-LF 852/CR-LF CP852/CR-LF pcl2 pclatin2
IBM857/CR-LF 857/CR-LF CP857/CR-LF
IBM860/CR-LF 860/CR-LF CP860/CR-LF
IBM861/CR-LF 861/CR-LF CP861/CR-LF cp-is
IBM863/CR-LF 863/CR-LF CP863/CR-LF
IBM865/CR-LF 865/CR-LF CP865/CR-LF

which, of course, is identical to the result of the first example, since the code 142 for the Macintosh is a small ‘e’ with acute.

More formally, option ‘-k’ lists all possible before charsets for the after charset given as the sole non-option argument to recode, but subject to restrictions given in pairs. If there is no non-option argument, the after charset is taken to be the default charset for this recode.

The restrictions are given as a comma-separated list of pairs, each pair consisting of two numbers separated by a colon. The numbers are taken as decimal when the initial digit is between ‘1’ and ‘9’; ‘0x’ starts a hexadecimal number, or else ‘0’ starts an octal number. The first number is a code in any before charset, while the second number is a code in the specified after charset. If the first number would not be transformed into the second number by recoding from some before charset to the after charset, then this before charset is rejected. A before charset is listed only if it is not rejected by any pair. The program will only test those before charsets having a tabular-style internal description (see Tabular), and the selected after charset must have one as well.
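
The number syntax for pairs can be sketched as a small parser; parse_pairs is a hypothetical helper written for this manual, not part of Recode:

```python
def parse_pairs(spec: str):
    """Parse a -k restriction list such as '130:233,0x82:0xE9'.
    Per the rules above: a leading digit 1-9 means decimal, '0x'
    means hexadecimal, and any other leading '0' means octal."""
    def number(text: str) -> int:
        if text.startswith(("0x", "0X")):
            return int(text, 16)
        if text.startswith("0") and len(text) > 1:
            return int(text, 8)
        return int(text, 10)
    return [tuple(number(part) for part in item.split(":"))
            for item in spec.split(",")]

print(parse_pairs("130:233,0x82:0xE9"))  # -> [(130, 233), (130, 233)]
```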

The produced list is in fact a subset of the list produced by the option ‘-l’. As for option ‘-l’, the non-option argument is interpreted as a charset name, possibly abbreviated to any unambiguous prefix.


-l[format], --list[=format]

This option asks for information about all charsets, or about one particular charset. No file will be recoded.

If there are no non-option arguments, Recode ignores the format value of the option; it writes a sorted list of charset names on standard output, one per line. When a charset name has aliases or synonyms, they follow the true charset name on its line, sorted from left to right. Each charset or alias is followed by its implied surfaces, if any. This list is over two hundred lines long. It is best used with ‘grep -i’, as in:

recode -l | grep -i greek

Within a collection of names for a single charset, the Recode library distinguishes one of them as being the genuine charset name, while the others are said to be aliases. The list normally integrates all charsets from the external iconv library, unless this is defeated through options like ‘--ignore=:iconv:’ or ‘-x:’. The portable libiconv library relates its own aliases of the same charset, and for a given set of aliases, if none of them is known to Recode already, then Recode picks one as the genuine charset. The iconv library within GNU libc makes all aliases appear as different charsets, and each will be presented as a charset by Recode, unless it is known otherwise.

There might be one non-option argument, in which case it is interpreted as a charset name, possibly abbreviated to any unambiguous prefix. This particular usage of the ‘-l’ option is obeyed only for charsets having a tabular-style internal description (see Tabular). Even if most charsets have this property, some do not, and the option ‘-l’ cannot be used to detail these particular charsets. To know whether a particular charset can be listed this way, you should merely try and see if it works. The format value of the option is a keyword from the following list. Keywords may be abbreviated by dropping suffix letters, and even reduced to the first letter only:


decimal

This format asks for the production on standard output of a concise tabular display of the charset, in which character code values are expressed in decimal.


octal

This format uses octal instead of decimal in the concise tabular display of the charset.


hexadecimal

This format uses hexadecimal instead of decimal in the concise tabular display of the charset.


full

This format requests an extensive display of the charset on standard output, using one line per character, showing its decimal, hexadecimal, octal and UCS-2 code values, and also a descriptive comment which should be the ISO 10646 name for the character.

The descriptive comment is given in English and ASCII, yet if the English description is not available but a French one is, then the French description is given instead, using Latin-1. However, if the LC_MESSAGES environment variable begins with the letters ‘fr’, then listing preference goes to French when both descriptions are available.

When option ‘-l’ is used together with a charset argument, the format defaults to decimal.


-T, --find-subsets

This option is a maintainer tool for evaluating the redundancy of those charsets in Recode which are internally represented by a UCS-2 data table. After the listing has been produced, the program exits without doing any recoding. The output is meant to be sorted, like this: ‘recode -T | sort’. The option makes Recode compare all pairs of charsets, seeking those which are subsets of others. The concept and results are better explained through a few examples. Consider these three sample lines from ‘-T’ output:

[  0] IBM891 == IBM903
[  1] IBM1004 < CP1252
[ 12] INVARIANT < CSA_Z243.4-1985-1

The first line means that IBM891 and IBM903 are completely identical as far as Recode is concerned, so one is fully redundant to the other. The second line says that IBM1004 is wholly contained within CP1252, yet there is a single character which is in CP1252 without being in IBM1004. The third line says that INVARIANT is wholly contained within CSA_Z243.4-1985-1, but twelve characters are in CSA_Z243.4-1985-1 without being in INVARIANT. The whole output might most probably be reduced and made more significant through a transitivity study.

Next: , Previous: , Up: Invoking recode   [Contents][Index]

3.4 Controlling how files are recoded

The following options give the user some fine-grained control over the recoding operations themselves.


With Texte Easy French conventions, use the colon ‘:’ instead of the double-quote ‘"’ for marking diaeresis. See Texte.


This option is only meaningful while getting out of the IBM-PC charset. In this charset, characters 176 to 223 are used for constructing rulers and boxes, using simple or double horizontal or vertical lines. This option forces the automatic selection of ASCII characters for approximating these rulers and boxes, at the cost of making the transformation irreversible. Option ‘-g’ implies ‘-f’.


The touch option is meaningful only when files are recoded over themselves. Without it, the time-stamps associated with files are preserved, to reflect the fact that changing the code of a file does not really alter its informational contents. When the user wants the recoded files to be time-stamped at the recoding time, this option inhibits the automatic protection of the time-stamps.


Before doing any recoding, the program will first print on the stderr stream the list of all intermediate charsets planned for the recoding, starting with the before charset and ending with the after charset. It also prints an indication of the recoding quality, as one of the words ‘reversible’, ‘one to one’, ‘one to many’, ‘many to one’ or ‘many to many’.

This information will appear once or twice. It is shown a second time only when the optimisation and step merging phase succeeds in replacing several single steps with a new one.

This option also has a second effect. The program will print on stderr one message per recoded file, so as to keep the user informed of the progress of its command.

An easy way to know beforehand the sequence or quality of a recoding is to use a command such as:

recode -v before..after < /dev/null

using the fact that, in Recode, an empty input file produces an empty output file.

-x charset

This option tells the program to ignore any recoding path through the specified charset, so disabling any single step using this charset as a start or end point. This may be used when the user wants to force Recode into using an alternate recoding path (yet using chained requests offers a finer control, see Requests).

charset may be abbreviated to any unambiguous prefix.
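The ruler and box approximation performed by option ‘-g’ above can be sketched as a simple character substitution. The mapping below is an invented illustration covering a few box-drawing characters, not Recode's actual table.

```python
# Sketch of the kind of approximation `-g` performs: box-drawing
# characters replaced by plain ASCII.  Invented mapping, for illustration.

BOX_TO_ASCII = {
    '\u2500': '-',   # horizontal line
    '\u2502': '|',   # vertical line
    '\u2550': '=',   # double horizontal line
    '\u250c': '+', '\u2510': '+', '\u2514': '+', '\u2518': '+',  # corners
    '\u253c': '+',   # crossing
}

def approximate_boxes(text):
    # Irreversible: several distinct box characters all collapse to '+'.
    return ''.join(BOX_TO_ASCII.get(c, c) for c in text)

print(approximate_boxes('\u250c\u2500\u2510\n\u2502A\u2502\n\u2514\u2500\u2518'))
```

Because distinct box characters map to the same ASCII character, the original text cannot be recovered, which is why ‘-g’ implies ‘-f’.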

Next: , Previous: , Up: Invoking recode   [Contents][Index]

3.5 Reversibility issues

The following options are somewhat related to reversibility issues:


With this option, irreversible or otherwise erroneous recodings are run to completion, and recode does not exit with a non-zero status when the only problems were irreversibility matters. See Reversibility.

Without this option, Recode tries to protect you against recoding a file irreversibly over itself8. Whenever an irreversible recoding is met, or any other recoding error, recode produces a warning on standard error. The current input file does not get replaced by its recoded version, and recode then proceeds with the recoding of the next file.

When the program is merely used as a filter, standard output will have received a partially recoded copy of standard input, up to the first error point. After all recodings have been done or attempted, and if some recoding has been aborted, recode exits with a non-zero status.


This option has the sole purpose of inhibiting warning messages about irreversible recodings, and other such diagnostics. It has no other effect; in particular, it does not prevent recodings from being aborted, nor recode from returning a non-zero exit status when irreversible recodings are met.

This option is set automatically for the child processes when recode splits itself into several collaborating copies. This way, the diagnostic is issued only once, by the parent. See option ‘-p’.


By using this option, the user requests that Recode be very strict while recoding a file, merely losing in the transformation any character which is not explicitly mapped from one charset to another. Such a loss is not reversible and so will cause Recode to fail, unless option ‘-f’ is also given as a kind of counter-measure.

Using ‘-s’ without ‘-f’ might render Recode very susceptible to the slightest file abnormalities. Even though it might be irritating to some users, such paranoia is sometimes wanted and useful.

Even if Recode tries hard to keep the recodings reversible, you should not develop an unconditional confidence in its ability to do so. You ought to keep only reasonable expectations about reverse recodings. In particular, consider:

Unless option ‘-s’ is used, Recode automatically tries to fill mappings with invented correspondences, often making them fully reversible. This filling is not made at random. The algorithm tries to stick to the identity mapping and, when this is not possible, it prefers generating many small permutation cycles, each involving only a few codes.

For example, here is how IBM-PC code 186 gets translated to control-U in Latin-1. Control-U is 21. Code 21 is the IBM-PC section sign, which is 167 in Latin-1. Recode cannot reciprocate 167 to 21, because 167 is the masculine ordinal indicator within IBM-PC, which is 186 in Latin-1. Code 186 within IBM-PC has no Latin-1 equivalent; by assigning it back to 21, Recode closes this short permutation loop.
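The three codes mentioned above form a permutation cycle of length 3, which can be modelled directly. The mapping below uses only the codes from this example, not Recode's full table.

```python
# The section-sign example forms a permutation cycle of length 3:
# IBM-PC 186 -> Latin-1 21, 21 -> 167, 167 -> 186.
ibmpc_to_latin1 = {186: 21, 21: 167, 167: 186}

# The mapping is a permutation of these three codes...
assert set(ibmpc_to_latin1) == set(ibmpc_to_latin1.values())

def apply(mapping, code, times):
    for _ in range(times):
        code = mapping[code]
    return code

# ...so applying it three times brings each code back to itself.
for code in (21, 167, 186):
    print(code, '->', apply(ibmpc_to_latin1, code, 3))
```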

As a consequence of this map filling, Recode may sometimes produce funny characters. They may look annoying, but they are nevertheless helpful when one changes their mind and wants to revert to the prior recoding. If you cannot stand them, use option ‘-s’, which asks for a very strict recoding.

This map filling sometimes has a few surprising consequences, which some users wrongly interpreted as bugs. Here are two examples.

  1. In some cases, Recode seems to copy a file without recoding it. But in fact, it does recode it. Consider the request:
    recode l1..us < File-Latin1 > File-ASCII
    cmp File-Latin1 File-ASCII

    then cmp will not report any difference. This is quite normal. Latin-1 gets correctly recoded to ASCII over the charsets' common part (the first 128 characters, in this case). The remaining 128 Latin-1 characters have no ASCII correspondent. Instead of losing them, Recode elects to map them to unspecified characters of ASCII, so making the recoding reversible. The simplest way of achieving this is merely to keep those last 128 characters unchanged. The overall effect is copying the file verbatim.

    If you feel this behaviour is too generous and you do not wish to care about reversibility, simply use option ‘-s’. By doing so, Recode will strictly map only those Latin-1 characters which have an ASCII equivalent, and will merely drop those which do not. Then there is a better chance that you will observe a difference between the input and the output file.

  2. Recoding the wrong way could sometimes give the false impression that recoding has almost been done properly. Consider the requests:
    recode 437..l1 < File-Latin1 > Temp1
    recode 437..l1 < Temp1 > Temp2

    thus wrongly declaring File-Latin1 to be an IBM-PC file, and recoding it to Latin-1. This is surely ill-defined and not meaningful. Yet, if you repeat this step a second time, you might notice that many (not all) characters in Temp2 are identical to those in File-Latin1. Sometimes, people try to discover how Recode works by experimenting a little at random, rather than by reading and understanding the documentation; results such as this are surely confusing, as they give those people a false feeling of having understood something.

    Reversible codings have the property that, if applied several times in the same direction, they will eventually bring any character back to its original value. Since Recode seeks small permutation cycles when creating reversible codings, besides characters unchanged by the recoding, most permutation cycles will be of length 2, fewer of length 3, and so on. So, it is to be expected that applying the recoding twice in the same direction will recover most characters, yet fail to recover those participating in permutation cycles of length 3. On the other hand, recoding six times in the same direction would recover all characters in cycles of length 1, 2, 3 or 6.
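The cycle-length argument above can be checked directly on a small invented mapping containing cycles of lengths 1, 2 and 3 (the codes are arbitrary, chosen only for illustration):

```python
# A filled mapping with permutation cycles of lengths 1, 2 and 3.
mapping = {
    0: 0,                    # cycle of length 1 (identity)
    1: 2, 2: 1,              # cycle of length 2
    3: 4, 4: 5, 5: 3,        # cycle of length 3
}

def apply(mapping, code, times):
    for _ in range(times):
        code = mapping[code]
    return code

# Applying twice recovers codes in cycles of length 1 and 2, but not 3.
recovered_after_2 = [c for c in mapping if apply(mapping, c, 2) == c]
# Applying six times recovers every code (6 is divisible by 1, 2 and 3).
recovered_after_6 = [c for c in mapping if apply(mapping, c, 6) == c]

print(recovered_after_2)   # [0, 1, 2]
print(recovered_after_6)   # [0, 1, 2, 3, 4, 5]
```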

Next: , Previous: , Up: Invoking recode   [Contents][Index]

3.6 Selecting sequencing methods

Recode can split itself into multiple parallel processes when it is discovered that many passes are needed to comply with the request. For example, suppose that four elementary steps were selected at recoding path optimisation time. Then Recode will split itself into four different interconnected tasks, logically equivalent to:

step1 <input | step2 | step3 | step4 >output

On systems where the pipes method is not available, the steps are performed in series.


When the recoding requires a combination of two or more elementary recoding steps, this option forces many passes over the data, using in-memory buffers to hold all intermediate results. If this option is selected in filter mode, that is, when the program reads standard input and writes standard output, it might take longer for programs further down the pipe chain to start receiving some recoded data.


When the recoding requires a combination of two or more elementary recoding steps, this option forces the program to fork itself into a few copies interconnected with pipes, using the pipe(2) system call. All copies of the program operate in parallel. This is the default behaviour in filter mode. If this option is used when files are recoded over themselves, this should also save disk space because some temporary files might not be needed, at the cost of more system overhead.


This option is accepted for backwards compatibility, and acts like ‘--sequence=memory’.
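The ‘--sequence=memory’ strategy described above amounts to applying each elementary step in turn, keeping the whole intermediate result in a buffer. The sketch below uses two trivial invented byte-level steps as stand-ins for real charset conversions.

```python
# Sketch of --sequence=memory: many passes over the data, each
# intermediate result held entirely in memory.  The two "steps" here
# are invented stand-ins for Recode's elementary recoding steps.

def step1(data):
    return data.upper()                 # stand-in for a first step

def step2(data):
    return data.replace(b'\r\n', b'\n') # stand-in for a second step

def run_in_memory(steps, data):
    for step in steps:
        data = step(data)               # whole result buffered each pass
    return data

print(run_in_memory([step1, step2], b'one\r\ntwo\r\n'))   # b'ONE\nTWO\n'
```

With ‘--sequence=pipe’, the same steps would instead run as parallel processes connected with pipe(2), so output starts flowing before the first step has consumed all its input.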

Next: , Previous: , Up: Invoking recode   [Contents][Index]

3.7 Using mixed charset input

In real life and practice, textual files are often made up of many charsets at once. Some parts of the file encode one charset, while other parts encode another charset, and so forth. Usually, a file does not toggle between more than two or three charsets. The means to distinguish which charsets are encoded at various places is not always available. Recode is able to handle only a few simple cases of mixed input.

The default Recode behaviour is to expect pure charset files, to be recoded as other pure charset files. However, the following options allow for a few precise kinds of mixed charset files.


While converting to or from one of HTML, LaTeX or BibTeX charset, limit conversion to some subset of all characters. For HTML, limit conversion to the subset of all non-ASCII characters. For LaTeX or BibTeX, limit conversion to the subset of all non-English letters. This is particularly useful, for example, when people create what would be valid HTML, TeX or LaTeX files, if only they were using provided sequences for applying diacritics instead of using the diacriticised characters directly from the underlying character set.

While converting to HTML, LaTeX or BibTeX charset, this option assumes that characters not in the said subset are properly coded or protected already; Recode then transmits them literally. While converting the other way, this option prevents translating back coded or protected versions of characters not in the said subset. See HTML. See LaTeX. See BibTeX.


The bulk of the input file is expected to be written in ASCII, except for parts, like comments and string constants, which are written using another charset than ASCII. When language is ‘c’, the recoding will proceed only with the contents of comments or strings, while everything else will be copied without recoding. When language is ‘po’, the recoding will proceed only within translator comments (those having whitespace immediately following the initial ‘#’) and with the contents of msgstr strings.

For the above things to work, the non-ASCII encoding of the comment or string should be such that an ASCII scan will successfully find where the comment or string ends.

Even if ASCII is the usual charset for writing programs, some compilers are able to directly read other charsets, like UTF-8, say. There is currently no provision in Recode for reading mixed charset sources which are not based on ASCII. It is probable that the need for mixed recoding is not as pressing in such cases.

For example, after one does:

recode -Spo pc/..u8 < input.po > output.po

file output.po holds a copy of input.po in which only translator comments and the contents of msgstr strings have been recoded from the IBM-PC charset to pure UTF-8, without attempting conversion of end-of-lines. Machine generated comments and original msgid strings are not to be touched by this recoding.

If language is not specified, ‘c’ is assumed.
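The ‘-Spo’ behaviour described above can be sketched as line-by-line filtering. The parsing rules below are simplified, and the "recoding" is a trivial stand-in (upper-casing) rather than a real charset step.

```python
# Sketch of the -Spo idea: recode only translator comments (a '#'
# followed by whitespace) and msgstr string contents, leaving msgid
# lines and machine-generated comments untouched.  Simplified parsing;
# str.upper stands in for the actual recoding step.

def recode_po_line(line, recode):
    if line.startswith('# '):                      # translator comment
        return '# ' + recode(line[2:])
    if line.startswith('msgstr "') and line.endswith('"'):
        return 'msgstr "' + recode(line[8:-1]) + '"'
    return line                                    # left untouched

lines = ['# a remark', '#. extracted', 'msgid "hello"', 'msgstr "bonjour"']
for line in lines:
    print(recode_po_line(line, str.upper))
```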

Next: , Previous: , Up: Invoking recode   [Contents][Index]

3.8 Using Recode within Emacs

The fact that the recode program acts as a filter when given no file arguments makes it quite easy to use from within GNU Emacs. For example, recoding the whole buffer from the IBM-PC charset to the current charset (for example, UTF-8 on Unix) is easily done with:

C-x h C-u M-| recode ibmpc RET

‘C-x h’ selects the whole buffer, and ‘C-u M-|’ filters and replaces the current region through the given shell command. Here is another example, binding the keys ‘C-c T’ to the recoding of the current region from Easy French to Latin-1 (on Unix) and ‘C-u C-c T’ to the recoding from Latin-1 (on Unix) to Easy French:

(global-set-key "\C-cT" 'recode-texte)

(defun recode-texte (flag)
  (interactive "P")
  (shell-command-on-region
   (region-beginning) (region-end)
   (concat "recode " (if flag "..txte" "txte")) t))

Previous: , Up: Invoking recode   [Contents][Index]

3.9 Debugging considerations

It is our experience that when Recode does not provide satisfying results, either the recode program was not called properly, the results were correct but raised some doubts nevertheless, or the files to recode were somewhat mangled. Genuine bugs are surely possible.

Unless you already are a Recode expert, it might be a good idea to quickly revisit the tutorial (see Tutorial) or the prior sections in this chapter, to make sure that you properly formatted your recoding request. In case you intended to use Recode as a filter, make sure that you did not forget to redirect your standard input (using the ‘<’ symbol in the shell, say). Some Recode false mysteries are also easily explained, see Reversibility.

For the other cases, some investigation is needed. To illustrate how to proceed, let’s presume that you want to recode the nicepage file, coded UTF-8, into HTML. The problem is that the command ‘recode u8..h nicepage’ yields:

recode: Invalid input in step `UTF-8..ISO-10646-UCS-2'

One good trick is to use recode in filter mode instead of in file replacement mode, see Synopsis. Another good trick is to use the ‘-v’ option, asking for a verbose description of the recoding steps. We could rewrite our recoding call as ‘recode -v u8..h <nicepage’, to get something like:

Request: UTF-8..:iconv:..ISO-10646-UCS-2..HTML_4.0
Shrunk to: UTF-8..ISO-10646-UCS-2..HTML_4.0
[…some output…]
recode: Invalid input in step `UTF-8..ISO-10646-UCS-2'

This might help you to better understand what the diagnostic means. The recoding request is achieved in two steps: the first recodes UTF-8 into UCS-2, the second recodes UCS-2 into HTML. The problem occurs within the first of these two steps, and since the input of this step is the input file given to Recode, it is the overall input file which seems to be invalid. Also, when used in filter mode, Recode processes as much input as possible before the error occurs and sends the result of this processing to standard output. Since standard output has not been redirected to a file, it is merely displayed on the user's screen. By inspecting near the end of the resulting HTML output, that is, what was recoded a bit before the recoding was interrupted, you may infer where the error stands in the real UTF-8 input file.

If you have the proper tools to examine the intermediate recoding data, you might also prefer to reduce the problem to a single step to better study it. This is what I usually do. For example, the last recode call above is more or less equivalent to:

recode -v UTF-8..ISO-10646-UCS-2 <nicepage >temporary
recode -v ISO-10646-UCS-2..HTML_4.0 <temporary
rm temporary

If you know that the problem is within the first step, you might prefer to concentrate on using the first recode line. If you know that the problem is within the second step, you might execute the first recode line once and for all, and then play with the second recode call, repeatedly using the temporary file created once by the first call.

Note that the ‘-f’ switch may be used to force the production of HTML output despite invalid input; this might be satisfying enough for you, and easier than repairing the input file. That depends on how strict you would like to be about the precision of the recoding process.

If you later see that your HTML file begins with ‘&lt;html&gt;’ when you expected ‘<html>’, then Recode might have done a bit more than you wanted. In this case, your input file was half UTF-8, half HTML already, that is, a mixed file (see Mixed). There is a special ‘-d’ switch for this case, so you might end up calling ‘recode -fd u8..h nicepage’. Until you are quite sure that you accept overwriting your input file no matter what, I recommend that you stick with filter mode.

If, after such experiments, you seriously think that Recode does not behave properly, there might be a genuine bug in either the program or the library itself, in which case I invite you to contribute a bug report, see Contributing.

Next: , Previous: , Up: Top   [Contents][Index]

4 A recoding library

The program named recode is just an application of its recoding library. The recoding library is available separately for other C programs. A good way to acquire some familiarity with the recoding library is to get acquainted with the recode program itself.

To use the recoding library once it is installed, a C program needs to have the following lines:

#include <stdlib.h>
#include <stdbool.h>
#include <recode.h>

const char *program_name;

near its beginning, and the user should have ‘-lrecode’ on the linking call, so modules from the recoding library are found.

The library contains four identifiable sets of routines: the outer level functions, the request level functions, the task level functions and the charset level functions. These are discussed in separate sections. For effectively using the recoding library in most applications, it should rarely be needed to study anything beyond the main initialisation function at outer level, and then various functions at request level.

Next: , Previous: , Up: Library   [Contents][Index]

4.1 Outer level functions

The outer level functions mainly prepare the whole recoding library for use, or do actions which are unrelated to specific recodings. Here is an example of a program which does not really do anything useful.

#include <stdbool.h>
#include <stdlib.h>
#include <recode.h>

const char *program_name;

int
main (int argc, char *const *argv)
{
  program_name = argv[0];
  RECODE_OUTER outer = recode_new_outer (RECODE_AUTO_ABORT_FLAG);

  recode_delete_outer (outer);
  exit (EXIT_SUCCESS);
}

The header file recode.h declares an opaque RECODE_OUTER structure, which the programmer should use for allocating a variable in their program. This ‘outer’ variable is given as a first argument to all outer level functions.

The RECODE_OUTER structure is really meant to be initialised only once in the life of a program, and terminated with the program itself. Program interfaces should take care to initialise it only once, if only for speed considerations. A good deal of overhead goes into outer level initialisation, and if the outer level were initialised afresh for each and every translated string, say, the Recode library would appear immensely slower than it is meant to be!

Because outer level initialization is meant to be done only once, not so much attention has been paid to avoid memory leaks at this level within Recode. This is hardly a reason for not plugging such leaks at any level: in the long run, they should all be chased and repaired.

Next: , Previous: , Up: Library   [Contents][Index]

4.2 Request level functions

The request level functions are meant to cover most recoding needs programmers may have; they should provide all usual functionality. Their API is almost stable by now.

To get started with request level functions, here is a full example of a program whose sole job is to filter ibmpc code on its standard input into latin1 code on its standard output.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <recode.h>

const char *program_name;

int
main (int argc, char *const *argv)
{
  program_name = argv[0];
  RECODE_OUTER outer = recode_new_outer (RECODE_AUTO_ABORT_FLAG);
  RECODE_REQUEST request = recode_new_request (outer);
  bool success;

  recode_scan_request (request, "ibmpc..latin1");

  success = recode_file_to_file (request, stdin, stdout);

  recode_delete_request (request);
  recode_delete_outer (outer);

  exit (success ? EXIT_SUCCESS : EXIT_FAILURE);
}

The header file recode.h declares a RECODE_REQUEST structure, which the programmer should use for allocating a variable in his program. This request variable is given as a first argument to all request level functions, and in most cases, may be considered as opaque.

Suppose an application is doing a lot of recoding using only a few different requests. For speed considerations, the RECODE_REQUEST structure should ideally be cached for each kind of request, so the request level initialisation is not redone for each and every string translated. The speedup should be more apparent when Recode is able to optimize the work by building on the fly, within the structure, new specialized recoding steps and their associated data tables.
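The caching idea described above can be sketched in a few lines. Python's codecs module stands in for the Recode library here; the cache itself is the point of the illustration.

```python
# Sketch of caching a compiled "request" per request string, so the
# request-level initialisation is not redone for every translated
# string.  codecs.lookup stands in for recode_scan_request here.

import codecs

_request_cache = {}

def get_codec(name):
    # One lookup per distinct request; later calls reuse the cached object.
    if name not in _request_cache:
        _request_cache[name] = codecs.lookup(name)
    return _request_cache[name]

info = get_codec('latin-1')
print(info is get_codec('latin-1'))   # True: the cached object is reused
```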

The following special function is still subject to change:

void recode_format_table (request, language, "name");

and is not documented anymore for now.

Next: , Previous: , Up: Library   [Contents][Index]

4.3 Task level functions

The task level functions are used internally by the request level functions; they allow more explicit control over the files and memory buffers holding the input and output of recoding processes. The interface specification of task level functions is still subject to change a bit.

To get started with task level functions, here is a full example of a program whose sole job is to filter ibmpc code on its standard input into latin1 code on its standard output. That is, this program has the same goal as the one from the previous section, but goes about it a bit differently.

#include <stdbool.h>
#include <stdlib.h>
#include <recodext.h>

const char *program_name;

int
main (int argc, char *const *argv)
{
  program_name = argv[0];
  RECODE_OUTER outer = recode_new_outer (0);
  RECODE_REQUEST request = recode_new_request (outer);
  RECODE_TASK task;
  bool success;

  recode_scan_request (request, "ibmpc..latin1");

  task = recode_new_task (request);
  task->input.name = "";
  task->output.name = "";
  success = recode_perform_task (task);

  recode_delete_task (task);
  recode_delete_request (request);
  recode_delete_outer (outer);

  exit (success ? EXIT_SUCCESS : EXIT_FAILURE);
}

Note that in the example above, recodext.h header is used instead of recode.h. By doing so, the various structures are not opaque anymore, and their fields may be accessed by name.

The header file recode.h declares a RECODE_TASK structure, which the programmer should use for allocating a variable in his program. This task variable is given as a first argument to all task level functions. The programmer ought to change and possibly consult a few fields in this structure, using special functions.

Next: , Previous: , Up: Library   [Contents][Index]

4.4 Charset level functions

Many functions are internal to the recoding library. Some of them have been made external and available, because the recode program had to retain all its previous functionality while being turned into a mere application of the recoding library. These functions are not really documented here for the time being, as we hope that many of them will vanish over time. Once this set of routines stabilises, it would be convenient to document them as an API for handling charset names and contents.

RECODE_CHARSET find_charset (name, cleaning-type);
bool list_all_charsets (charset);
bool list_concise_charset (charset, list-format);
bool list_full_charset (charset);

Previous: , Up: Library   [Contents][Index]

4.5 Handling errors

The recode program, while using the Recode library, needs to control whether recoding problems are reported or not, and then reflect these in the exit status. The program should also instruct the library whether the recoding should be abruptly interrupted when an error is met (so sparing processing when it is known in advance that a wrong result would be discarded anyway), or if it should proceed nevertheless. Here is how the library groups errors into levels, listed here in order of increasing severity.


No error was met on previous library calls.


The input text was using one of the many alternative codings for some phenomenon, but not the one Recode would have canonically generated. So, if the reverse recoding is later attempted, it would produce a text having the same meaning as the original text, yet not being byte identical.

For example, a Base64 block in which end-of-lines appear elsewhere than at every 76 characters is not canonical. An e-circumflex in TeX which is coded as ‘\^{e}’ instead of ‘\^e’ is not canonical.
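The Base64 case can be demonstrated directly: two line-wrapping widths decode to the same bytes, yet only the 76-column form is canonical. This is a sketch of the canonicality notion, not Recode's own Base64 code.

```python
# Two spellings of the same Base64 block: both decode to the same bytes,
# but only the 76-column one is the canonical form.

import base64

def wrap(text, width):
    return '\n'.join(text[i:i + width] for i in range(0, len(text), width))

data = bytes(range(64)) * 3
encoded = base64.b64encode(data).decode('ascii')
canonical = wrap(encoded, 76)     # end-of-lines at every 76 characters
variant = wrap(encoded, 60)       # valid, but not canonical

print(base64.b64decode(canonical.replace('\n', '')) == data)   # True
print(base64.b64decode(variant.replace('\n', '')) == data)     # True
print(canonical == variant)                                    # False
```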


It has been discovered that if the reverse recoding was attempted on the text output by this recoding, we would not obtain the original text, only because an ambiguity was generated by accident in the output text. This ambiguity would then cause the wrong interpretation to be taken.

Here are a few examples. If the Latin-1 sequence ‘e^’ is converted to Easy French and back, the result will be interpreted as e-circumflex and so, will not reflect the intent of the original two characters. Recoding an IBM-PC text to Latin-1 and back, where the input text contained an isolated LF, will have a spurious CR inserted before the LF.
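The isolated-LF example above can be modelled with the end-of-line halves of the two recodings. The two functions below are simplified stand-ins for the end-of-line handling of the real IBM-PC and Latin-1 steps.

```python
# Sketch of the isolated-LF ambiguity: IBM-PC uses CR LF as end-of-line,
# Latin-1 uses plain LF.  A lone LF in the IBM-PC input cannot survive
# the round trip.  Simplified stand-ins for the real recoding steps.

def ibmpc_to_latin1(data):
    return data.replace(b'\r\n', b'\n')

def latin1_to_ibmpc(data):
    return data.replace(b'\n', b'\r\n')

original = b'line one\r\nlone\nfeed\r\n'     # contains an isolated LF
round_trip = latin1_to_ibmpc(ibmpc_to_latin1(original))

print(round_trip)                 # a spurious CR appears before the lone LF
print(round_trip == original)     # False
```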

Currently, there are many cases in the library where the production of ambiguous output is not properly detected, as it is sometimes a difficult problem to accomplish this detection, or to do it speedily.


One or more input characters could not be recoded, because there is just no representation for them in the output charset.

Here are a few examples. Non-strict mode often allows Recode to compute on-the-fly mappings for unrepresentable characters, but strict mode prohibits such attribution of reversible translations: so strict mode might often trigger such an error. Most UCS-2 codes used to represent Asian characters cannot be expressed in various Latin charsets.
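As a rough analogy for the strict-mode behaviour above, Python's codec machinery can either fail on an unrepresentable character, or silently lose it. This is only an illustration of the concept, not Recode's implementation.

```python
# An unrepresentable character: GREEK CAPITAL LETTER OMEGA has no
# representation in Latin-1, so a strict conversion fails outright.

try:
    '\u03a9 ohm'.encode('latin-1')
except UnicodeEncodeError as error:
    print('strict conversion fails:', error.reason)

# The lossy counterpart merely loses the character, irreversibly.
print('\u03a9 ohm'.encode('latin-1', errors='ignore'))   # b' ohm'
```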

Since iconv does not distinguish untranslatable from invalid input, Recode has to use a workaround to detect when input is untranslatable. Unfortunately, it cannot currently tell how much input is untranslatable, so it cannot reliably skip such input: typically the input is then diagnosed as invalid. Two possible workarounds are to set the abort_level to RECODE_UNTRANSLATABLE, or not to use iconv.


The input text does not comply with the coding it is declared to hold. So, there is no way by which a reverse recoding would reproduce this text, because Recode should never produce invalid output.

Here are a few examples. In strict mode, ASCII text is not allowed to contain characters with the eight bit set. UTF-8 encodings ought to be minimal9.
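The minimality rule for UTF-8 can be demonstrated with an overlong sequence: 0xC0 0xA1 spells the code point of ‘!’ in two bytes where one suffices, and a conforming decoder must reject it.

```python
# UTF-8 encodings ought to be minimal: 0xC0 0xA1 is an overlong
# (non-minimal) encoding of '!' and must be rejected as invalid input.

minimal = b'\x21'            # '!' encoded minimally, in one byte
overlong = b'\xc0\xa1'       # the same code point padded into two bytes

print(minimal.decode('utf-8'))
try:
    overlong.decode('utf-8')
except UnicodeDecodeError:
    print('overlong sequence rejected')
```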


The underlying system reported an error while the recoding was going on, likely an input/output error. (This error symbol is currently unused in the library.)


The programmer or user requested something the recoding library is unable to provide, or used the API wrongly. (This error symbol is currently unused in the library.)


Something really wrong, which should normally never happen, was detected within the recoding library. This might be due to genuine bugs in the library, or maybe due to un-initialised or overwritten arguments to the API. (This error symbol is currently unused in the library.)


This error code should never be returned, it is only internally used as a sentinel for the list of all possible error codes.

One should be able to set the error level threshold for returning failure at the end of the recoding, and also the threshold for immediate interruption. If many errors occur while the recoding proceeds which are not severe enough to interrupt it, then the most severe error is retained, while others are forgotten10. In case of an error, the action taken thus depends on these two thresholds.

See Task level, and particularly the description of the fields fail_level, abort_level and error_so_far, for more information about how errors are handled.

Next: , Previous: , Up: Top   [Contents][Index]

5 The universal charset

Standard ISO 10646 defines a universal character set, intended to encompass in the long run all languages written on this planet. It is based on wide characters, and offers room for two billion characters (2^31).

This charset was to become available in Recode under the name UCS, with many external surfaces for it. But in the current version, only surfaces of UCS are offered, each presented as a genuine charset rather than a surface. Such surfaces are only meaningful for the UCS charset, so it is not that useful to draw a line between the surfaces and the only charset to which they may apply.

UCS stands for Universal Character Set. UCS-2 and UCS-4 are fixed length encodings, using two or four bytes per character respectively. UTF stands for UCS Transformation Format; the UTF formats are variable length encodings dedicated to UCS. UTF-1 was based on ISO 2022; it did not succeed11. UTF-2 replaced it; it has been called UTF-FSS (File System Safe) in Unicode or Plan9 contexts, but is better known today as UTF-8. To complete the picture, there is UTF-16, based on 16-bit units, and UTF-7, which is meant for transmissions limited to 7-bit bytes. Most often, one might see UTF-8 used for external storage, and UCS-2 used for internal storage.

When Recode produces any representation of UCS, it uses the replacement character U+FFFD for any valid character which is not representable in the goal charset12. This happens, for example, when UCS-2 cannot represent a wide UCS-4 character or, for a similar reason, a character whose UTF-8 sequence uses more than three bytes. The replacement character is meant to represent an existing character, so it is never produced to represent an invalid sequence or ill-formed character in the input text. In such cases, Recode just gets rid of the noise, while taking note of the error in its usual ways.

Even if UTF-8 is really an encoding, it is the encoding of a single character set, and nothing else. It is useful to distinguish between an encoding (a surface, within Recode) and a charset, but only when the surface may be applied to several charsets. Specifying a charset is a bit simpler than specifying a surface in a Recode request, and there would be no practical advantage in imposing a more complex syntax on Recode users when it is simple to assimilate UTF-8 to a charset. Similar considerations apply to UCS-2, UCS-4, UTF-16 and UTF-7. These are all considered to be charsets.

Next: , Previous: , Up: Universal   [Contents][Index]

5.1 Universal Character Set, 2 bytes

One surface of UCS is usable for the subset defined by its first sixty thousand characters (in fact, 31 * 2^11 codes), and uses exactly two bytes per character. It is a mere dump of the internal memory representation which is natural for this subset, and as such carries endianness problems with it.

A non-empty UCS-2 file normally begins with a so called byte order mark, having value 0xFEFF. The value 0xFFFE is not an UCS character, so if this value is seen at the beginning of a file, Recode reacts by swapping all pairs of bytes. The library also properly reacts to other occurrences of 0xFEFF or 0xFFFE elsewhere than at the beginning, because concatenation of UCS-2 files should stay a simple matter, but it might trigger a diagnostic about non canonical input.
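The byte-order check described above can be sketched as follows (a Python illustration of the mechanism, not Recode's actual code):

```python
# If a UCS-2 stream begins with the swapped byte order mark 0xFFFE,
# swap every pair of bytes; otherwise pass the data through unchanged.
# Assumes an even number of bytes, as UCS-2 requires.
def fix_ucs2_order(data: bytes) -> bytes:
    if data[:2] == b"\xff\xfe":  # byte-swapped mark seen at start of file
        swapped = bytearray(len(data))
        swapped[0::2] = data[1::2]
        swapped[1::2] = data[0::2]
        data = bytes(swapped)
    return data

sample = "\ufeffAB".encode("utf-16-le")  # little-endian: mark comes out as FF FE
print(fix_ucs2_order(sample).hex(" "))
```

After the swap, the stream starts with the canonical 0xFEFF mark and all characters are in big-endian order.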

By default, when producing an UCS-2 file, Recode always outputs the high order byte before the low order byte. Note that this may not be the case if iconv is used; in that case, you may be able to use the charset UCS-2BE to specify big-endian UCS-2. The order can also be easily overridden through the 21-Permutation surface (see Permutations). For example, the command:

recode u8..u2/21 < input > output

asks for an UTF-8 to UCS-2 conversion, with swapped byte output.

Use UCS-2 as a genuine charset. This charset is available in Recode under the name ISO-10646-UCS-2. Accepted aliases are UCS-2, BMP, rune and u2.

The Recode library is able to combine some UCS-2 sequences of codes into single-code characters, representing a few diacriticized characters, ligatures or diphthongs which have been included to ease mapping with other existing charsets. It is also able to explode such single-code characters into the corresponding sequence of codes. The request syntax for triggering such operations is rudimentary and temporary. The combined-UCS-2 pseudo-character set is a special form of UCS-2 in which known combining sequences have been replaced by the simpler code. Using combined-UCS-2 instead of UCS-2 in an after position of a request forces a combining step, while using combined-UCS-2 instead of UCS-2 in a before position of a request forces an exploding step. For the time being, one has to resort to advanced request syntax to achieve other effects. For example:

recode u8..co,u2..u8 < input > output

copies the UTF-8 input to the output, still in UTF-8, yet merging combining characters into single codes whenever possible.
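The combining and exploding steps are analogous to Unicode normalisation; Python's unicodedata module illustrates both directions (Recode's own tables differ in detail):

```python
# NFC merges a base letter plus combining mark into a single code;
# NFD explodes a single code back into the combining sequence.
import unicodedata

exploded = "e\u0301"  # LATIN SMALL LETTER E + COMBINING ACUTE ACCENT
combined = unicodedata.normalize("NFC", exploded)
print(hex(ord(combined)))                                   # one code point
print([hex(ord(c)) for c in unicodedata.normalize("NFD", combined)])
```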


5.2 Universal Character Set, 4 bytes

Another surface of UCS uses exactly four bytes per character; it is a mere dump of the internal memory representation which is natural for the whole charset and, as such, carries endianness problems with it.

Use it as a genuine charset. This charset is available in Recode under the name ISO-10646-UCS-4. Accepted aliases are UCS, UCS-4, ISO_10646, 10646 and u4.


5.3 Universal Transformation Format, 7 bits

UTF-7 comes from IETF rather than ISO, and is described by RFC 2152, in the MIME series. The UTF-7 encoding is meant to fit UCS-2 over channels limited to seven bits per byte. It proceeds from a mix between the spirit of Quoted-Printable and methods of Base64, adapted to Unicode contexts.
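The mix of Quoted-Printable spirit and Base64 method is visible in Python's built-in UTF-7 codec: ASCII passes through directly, while other UCS-2 values are packed into a modified Base64 run between '+' and '-':

```python
# UTF-7 keeps the channel 7-bit clean (Python codec, not recode itself).
text = "caf\u00e9"
encoded = text.encode("utf-7")
print(encoded)  # b"caf+AOk-": é (U+00E9) becomes the shifted run +AOk-
print(encoded.decode("utf-7"))
```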

This charset is available in Recode under the name UNICODE-1-1-UTF-7. Accepted aliases are UTF-7, TF-7 and u7.


5.4 Universal Transformation Format, 8 bits

Even if UTF-8 does not originally come from IETF, there is now RFC 2279 to describe it. In letters sent on 1995-01-21 and 1995-04-20, Markus Kuhn writes:

UTF-8 is an ASCII compatible multi-byte encoding of the ISO 10646 universal character set (UCS). UCS is a 31-bit superset of all other character set standards. The first 256 characters of UCS are identical to those of ISO 8859-1 (Latin-1). The UCS-2 encoding of UCS is a sequence of bigendian 16-bit words, the UCS-4 encoding is a sequence of bigendian 32-bit words. The UCS-2 subset of ISO 10646 is also known as “Unicode”. As both UCS-2 and UCS-4 require heavy modifications to traditional ASCII oriented system designs (e.g. Unix), the UTF-8 encoding has been designed for these applications.

In UTF-8, only ASCII characters are encoded using bytes below 128. All other non-ASCII characters are encoded as multi-byte sequences consisting only of bytes in the range 128-253. This avoids critical bytes like NUL and / in UTF-8 strings, which makes the UTF-8 encoding suitable for being handled by the standard C string library and being used in Unix file names. Other properties include the preserved lexical sorting order and that UTF-8 allows easy self-synchronisation of software receiving UTF-8 strings.

UTF-8 is the most common external surface of UCS, each character uses from one to six bytes, and is able to encode all 2^31 characters of the UCS. It is implemented as a charset, with the following properties:

These properties also have a few nice consequences:

In some cases, when little processing is done on a lot of strings, one may choose for efficiency reasons to handle UTF-8 strings directly, even though they are variable length, as it is easy to find the start of each character. Character insertion or replacement might require moving the remainder of the string in either direction, however. In most cases, it is faster and easier to convert from UTF-8 to UCS-2 or UCS-4 prior to processing.
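The self-synchronisation property mentioned above rests on the fact that continuation bytes all lie in the range 0x80-0xBF, so any byte outside that range begins a character. A small Python sketch:

```python
# Find the offsets where UTF-8 characters start, without decoding.
def char_starts(data: bytes):
    return [i for i, b in enumerate(data) if not 0x80 <= b <= 0xBF]

data = "a\u00e9\u20ac".encode("utf-8")  # 1-, 2- and 3-byte sequences
print(data.hex(" "), char_starts(data))
```

This is the property that makes scanning for character boundaries cheap even in the middle of a stream.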

This charset is available in Recode under the name UTF-8. Accepted aliases are UTF-2, UTF-FSS, FSS_UTF, TF-8 and u8.


5.5 Universal Transformation Format, 16 bits

Another external surface of UCS is also variable length, each character using either two or four bytes. It is usable for the subset defined by the first million characters (17 * 2^16) of UCS.

Martin J. Dürst writes (to comp.std.internat, on 1995-03-28):

UTF-16 is another method that reserves two times 1024 codepoints in Unicode and uses them to index around one million additional characters. UTF-16 is a little bit like former multibyte codes, but quite not so, as both the first and the second 16-bit code clearly show what they are. The idea is that one million codepoints should be enough for all the rare Chinese ideograms and historical scripts that do not fit into the Base Multilingual Plane of ISO 10646 (with just about 63,000 positions available, now that 2,000 are gone).
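The arithmetic behind the two reserved blocks of 1024 codepoints can be sketched directly (an illustration of the surrogate mechanism, not Recode code):

```python
# Split a code point above U+FFFF into its UTF-16 surrogate pair.
def to_surrogates(cp: int):
    assert cp > 0xFFFF, "BMP characters need no surrogates"
    cp -= 0x10000                      # leaves a 20-bit value
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

hi, lo = to_surrogates(0x1F600)
print(hex(hi), hex(lo))                # high and low halves are self-identifying
```

Because the high and low halves come from disjoint ranges, each 16-bit unit "clearly shows what it is", as the quotation puts it.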

This charset is available in Recode under the name UTF-16. Accepted aliases are Unicode, TF-16 and u6.


5.6 Frequency count of characters

A device may be used to obtain a list of characters in a file, and how many times each character appears. Each count is followed by the UCS-2 value of the character and, when known, the RFC 1345 mnemonic for that character.
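A rough equivalent of this counting device can be sketched with Python's Counter, using Unicode character names where recode would print RFC 1345 mnemonics:

```python
# Count characters and print each with its UCS value and a name.
from collections import Counter
import unicodedata

text = "Hello"
for ch, n in sorted(Counter(text).items()):
    print(f"{n:4d}  {ord(ch):04X}  {unicodedata.name(ch, '?').lower()}")
```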

This charset is available in Recode under the name count-characters.

This count feature has been implemented as a charset. This may change in some later version, as it would sometimes be convenient to count original bytes, instead of their UCS-2 equivalent.


5.7 Fully interpreted UCS dump

Another device may be used to get fully interpreted dumps of an UCS-2 stream of characters, with one UCS-2 character displayed on a full output line. Each line receives the RFC 1345 mnemonic for the character if it exists, the UCS-2 value of the character, and a descriptive comment for that character. As each input character produces its own output line, beware that the output file from this conversion may be much, much bigger than the input file.

This charset is available in Recode under the name dump-with-names.

This dump-with-names feature has been implemented as a charset rather than a surface. This is surely debatable. The current implementation allows for dumping charsets other than UCS-2. For example, the command ‘recode l2..full < input’ implies a necessary conversion from Latin-2 to UCS-2, as dump-with-names is only connected out from UCS-2. In such cases, Recode does not display the original Latin-2 codes in the dump, only the corresponding UCS-2 values. To give a simpler example, the command

echo 'Hello, world!' | recode us..dump

produces the following output:

UCS2   Mne   Description

0048   H     latin capital letter h
0065   e     latin small letter e
006C   l     latin small letter l
006C   l     latin small letter l
006F   o     latin small letter o
002C   ,     comma
0020   SP    space
0077   w     latin small letter w
006F   o     latin small letter o
0072   r     latin small letter r
006C   l     latin small letter l
0064   d     latin small letter d
0021   !     exclamation mark
000A   LF    line feed (lf)

The descriptive comment is given in English and ASCII, yet if the English description is not available but a French one is, then the French description is given instead, using Latin-1. However, if the LC_MESSAGES environment variable begins with the letters ‘fr’, then listing preference goes to French when both descriptions are available.
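Output in the same spirit as the table above can be sketched in Python, substituting Unicode character names for the RFC 1345 mnemonics and descriptions recode uses:

```python
# Print one line per character: UCS-2 value plus a descriptive name.
import unicodedata

def dump(text):
    print("UCS2   Description")
    for ch in text:
        print(f"{ord(ch):04X}   {unicodedata.name(ch, '').lower()}")

dump("Hi!")
```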

Here is another example. To get the long description of the code 237 in Latin-5 table, one may use the following command.

echo -n 237 | recode l5/d..dump

If your echo does not grok ‘-n’, use ‘echo 237\c’ instead. Here is how to see what Unicode U+03C6 means, while getting rid of the title lines.

echo -n 0x03C6 | recode u2/x2..dump | tail +3


6 The iconv library

The Recode library is able to use the capabilities of an external, pre-installed iconv library, usually as provided by GNU libc or the portable libiconv written by Bruno Haible. In fact, many capabilities of the Recode library are duplicated in an external iconv library, as they likely share many charsets. We discuss, here, the issues related to this duplication, and other peculiarities specific to the iconv library.

The RECODE_STRICT_MAPPING_FLAG option, corresponding to the ‘--strict’ flag, is implemented by adding iconv option //IGNORE to the ‘after’ encoding. This has the side effect that untranslatable input is only signalled at the end of the conversion, whereas with Recode’s built-in conversion routines the error will be signalled immediately.

If the string -translit is appended to the after encoding, characters being converted are transliterated when needed and possible. This means that when a character cannot be represented in the target character set, it can be approximated through one or several similar looking characters. Characters that are outside of the target character set and cannot be transliterated are replaced with a question mark (?) in the output. This corresponds to the iconv option //TRANSLIT.
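The //IGNORE and question-mark fallback behaviours have close analogues in Python's codec error handlers, which can be used to see the effect (the standard library has no transliteration handler, so //TRANSLIT itself is not shown):

```python
# œ (U+0153) has no Latin-1 representation; ï (U+00EF) does.
text = "na\u00efve \u0153uvre"
print(text.encode("latin-1", errors="replace"))  # untranslatable -> b'?'
print(text.encode("latin-1", errors="ignore"))   # untranslatable dropped
```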

To check whether iconv is used for a particular conversion, just use the ‘-v’ or ‘--verbose’ option, see Recoding, and check whether ‘:iconv:’ appears as an intermediate charset.

The :iconv: charset represents a conceptual pivot charset within the external iconv library (in fact, this pivot exists, but is not directly reachable). This charset has : (a mere colon) and :libiconv: as aliases. It is not allowed to recode from or to this charset directly. But when this charset is selected as an intermediate, usually by automatic means, the external iconv library is called to handle the transformations. By using an ‘--ignore=:iconv:’ option on the recode call or, equivalently but more simply, ‘-x:’, Recode is instructed to avoid this charset as an intermediate, with the consequence that the external iconv library is not used. You can also use ‘--prefer-iconv’ to use iconv if possible. Consider these calls:

recode l1..1250 < input > output
recode -x: l1..1250 < input > output
recode --prefer-iconv l1..1250 < input > output

All should transform input from ISO-8859-1 to CP1250 on output. The first call might use the external iconv library, while the second call definitely avoids it. The third call will use the external iconv library if it supports the required conversion. Whatever the path used, the results should normally be identical. However, there might be observable differences. Most of them might result from reversibility issues, as the external iconv engine does not likely address reversibility in the same way. Even if much less likely, some differences might result from slight errors in the tables used; such differences should then be reported as bugs.

Discrepancies might be seen in the area of error detection and recovery. The Recode library usually tries to detect canonicity errors in input, and production of ambiguous output, but the external iconv library does not necessarily do it the same way. Moreover, the Recode library may not always recover as nicely as possible when the external iconv has no translation for a given character.

The external iconv libraries may offer different sets of charsets and aliases from one library to another, and also between successive versions of a single library. Best is to check the documentation of the external iconv library, as of the time Recode was installed, to know which charsets and aliases are being provided.

The ‘--ignore=:iconv:’ or ‘-x:’ options might be useful when there is a need to make a recoding more exactly repeatable between machines or installations, the idea being to remove the variance possibly introduced by the various implementations of an external iconv library. These options might also help in deciding whether some recoding problem is genuine to Recode, or is induced by the external iconv library.


7 Tabular sources (RFC 1345)

An important part of the tabular charset knowledge in Recode comes from RFC 1345 or, alternatively, from the chset tools, both maintained by Keld Simonsen. The RFC 1345 document:

“Character Mnemonics & Character Sets”, K. Simonsen, Request for Comments no. 1345, Network Working Group, June 1992.

defines many character mnemonics and character sets. The Recode library implements most of RFC 1345, however:

Keld Simonsen keld@dkuug.dk did most of RFC 1345 himself, with some funding from Danish Standards and the Nordic standards (INSTA) project. He also did the character set design work, with substantial input from Olle Jaernefors. Keld typed in almost all of the tables; some have been contributed. A number of people have checked the tables in various ways. The RFC lists a number of people who helped.

Keld and the Recode maintainer have an arrangement by which any newly discovered information submitted by Recode users about tabular charsets is forwarded to Keld, eventually merged into Keld’s work, and only then reimported into Recode. Recode does not try to compete, nor even establish itself as an alternate or diverging reference: RFC 1345 and its new drafts stay the genuine source for most tabular information conveyed by Recode. Keld has been more than collaborative so far, so there is no reason that we act otherwise. In a word, Recode should be perceived as the application of external references, but not as a reference in itself.

Internally, RFC 1345 associates with each character an unambiguous mnemonic of a few characters, taken from ISO 646, which is a minimal ASCII subset of 83 characters. The charset made up by these mnemonics is available in Recode under the name RFC1345. It has mnemonic and 1345 as aliases. As implemented, this charset exactly corresponds to mnemonic+ascii+38, using RFC 1345 nomenclature. Roughly said, ISO 646 characters represent themselves, except for the ampersand (&), which appears doubled. A prefix of a single ampersand introduces a mnemonic. For mnemonics using two characters, the prefix is immediately followed by the mnemonic. For longer mnemonics, the prefix is followed by an underline (_), the mnemonic, and another underline. Conversions to this charset are usually reversible.
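The escaping rule just described can be sketched as follows. The mnemonic table here is a tiny, partly hypothetical stand-in ("e'" is the genuine RFC 1345 mnemonic for é; "-M3" is an invented three-character example), not the real 83-character-alphabet table:

```python
# Sketch of the RFC1345 charset escaping: '&' doubles, two-character
# mnemonics follow a bare '&', longer mnemonics are wrapped in '&_..._'.
MNEMONICS = {"\u00e9": "e'", "\u2014": "-M3"}  # hypothetical subset

def to_rfc1345(text: str) -> str:
    out = []
    for ch in text:
        if ch == "&":
            out.append("&&")                    # ampersand appears doubled
        elif ch in MNEMONICS:
            m = MNEMONICS[ch]
            out.append("&" + m if len(m) == 2 else "&_" + m + "_")
        else:
            out.append(ch)                      # ISO 646 represents itself
    return "".join(out)

print(to_rfc1345("caf\u00e9 &"))
```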

Currently, Recode does not offer any of the many other possible variations of this family of representations.


367, ANSI_X3.4-1986, ASCII, CP367, IBM367, ISO646-US, ISO_646.irv:1991, US-ASCII, iso-ir-6 and us are aliases for this charset. Source: ISO 2375 registry.


ISO_9036, arabic7 and iso-ir-89 are aliases for this charset. Source: ISO 2375 registry.


ISO646-GB, gb, iso-ir-4 and uk are aliases for this charset. Source: ISO 2375 registry.


iso-ir-47 is an alias for this charset. Source: ISO 2375 registry.


1250, ms-ee and windows-1250 are aliases for this charset. Source: UNICODE 1.0.


1251, ms-cyrl and windows-1251 are aliases for this charset. Source: UNICODE 1.0.


1252, ms-ansi and windows-1252 are aliases for this charset. Source: UNICODE 1.0.


1253, ms-greek and windows-1253 are aliases for this charset. Source: UNICODE 1.0.


1254, ms-turk and windows-1254 are aliases for this charset. Source: UNICODE 1.0.


1255, ms-hebr and windows-1255 are aliases for this charset. Source: UNICODE 1.0.


1256, ms-arab and windows-1256 are aliases for this charset. Source: UNICODE 1.0.


1257, WinBaltRim and windows-1257 are aliases for this charset. Source: CEN/TC304 N283.


ISO646-CA, ca, csa7-1 and iso-ir-121 are aliases for this charset. Source: ISO 2375 registry.


ISO646-CA2, csa7-2 and iso-ir-122 are aliases for this charset. Source: ISO 2375 registry.


iso-ir-123 is an alias for this charset. Source: ISO 2375 registry.


KOI-8_L2, iso-ir-139 and koi8l2 are aliases for this charset. Source: ISO 2375 registry.


CWI-2 and cp-hu are aliases for this charset. Source: Computerworld Sza’mita’stechnika vol 3 issue 13 1988-06-29.


dec is an alias for this charset. Source: VAX/VMS User’s Manual, Order Number: AI-Y517A-TE, April 1986.


ISO646-DE, de and iso-ir-21 are aliases for this charset. Source: ISO 2375 registry.


DS2089, ISO646-DK and dk are aliases for this charset. Source: Danish Standard, DS 2089, February 1974.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


friss is an alias for this charset. Source: Skyrsuvelar Rikisins og Reykjavikurborgar, feb 1982.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


Source: IBM 3270 Char Set Ref Ch 10, GA27-2837-9, April 1987.


ECMA-113, ECMA-113:1986 and iso-ir-111 are aliases for this charset. Source: ISO 2375 registry.


ISO646-ES and iso-ir-17 are aliases for this charset. Source: ISO 2375 registry.


ISO646-ES2 and iso-ir-85 are aliases for this charset. Source: ISO 2375 registry.


ISO646-CN, cn and iso-ir-57 are aliases for this charset. Source: ISO 2375 registry.


ST_SEV_358-88 and iso-ir-153 are aliases for this charset. Source: ISO 2375 registry.


037, CP037, ebcdic-cp-ca, ebcdic-cp-nl, ebcdic-cp-us and ebcdic-cp-wt are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


038, CP038 and EBCDIC-INT are aliases for this charset. Source: IBM 3174 Character Set Ref, GA27-3831-02, March 1990.


1004, CP1004 and os2latin1 are aliases for this charset. Source: CEN/TC304 N283, 1994-02-04.


1026 and CP1026 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


1047 and CP1047 are aliases for this charset. Source: IBM Character Data Representation Architecture. Registry SC09-1391-00 p 150.


256, CP256 and EBCDIC-INT1 are aliases for this charset. Source: IBM Registry C-H 3-3220-050.


273 and CP273 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


274, CP274 and EBCDIC-BE are aliases for this charset. Source: IBM 3174 Character Set Ref, GA27-3831-02, March 1990.


275, CP275 and EBCDIC-BR are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


EBCDIC-CP-DK and EBCDIC-CP-NO are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


278, CP278, ebcdic-cp-fi and ebcdic-cp-se are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


280, CP280 and ebcdic-cp-it are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


281, CP281 and EBCDIC-JP-E are aliases for this charset. Source: IBM 3174 Character Set Ref, GA27-3831-02, March 1990.


284, CP284 and ebcdic-cp-es are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


285, CP285 and ebcdic-cp-gb are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


290, CP290 and EBCDIC-JP-kana are aliases for this charset. Source: IBM 3174 Character Set Ref, GA27-3831-02, March 1990.


297, CP297 and ebcdic-cp-fr are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


420, CP420 and ebcdic-cp-ar1 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990. IBM NLS RM p 11-11.


423, CP423 and ebcdic-cp-gr are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


424, CP424 and ebcdic-cp-he are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


437 and CP437 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


500, 500V1, CP500, ebcdic-cp-be and ebcdic-cp-ch are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


850 and CP850 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990. Source: UNICODE 1.0.


851 and CP851 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


852, CP852, pcl2 and pclatin2 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


855 and CP855 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


857 and CP857 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


860 and CP860 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


861, CP861 and cp-is are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


862 and CP862 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


863 and CP863 are aliases for this charset. Source: IBM Keyboard layouts and code pages, PN 07G4586 June 1991.


864 and CP864 are aliases for this charset. Source: IBM Keyboard layouts and code pages, PN 07G4586 June 1991.


865 and CP865 are aliases for this charset. Source: IBM DOS 3.3 Ref (Abridged), 94X9575 (Feb 1987).


868, CP868 and cp-ar are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


869, CP869 and cp-gr are aliases for this charset. Source: IBM Keyboard layouts and code pages, PN 07G4586 June 1991.


870, CP870, ebcdic-cp-roece and ebcdic-cp-yu are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


871, CP871 and ebcdic-cp-is are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


875, CP875 and EBCDIC-Greek are aliases for this charset. Source: UNICODE 1.0.


880, CP880 and EBCDIC-Cyrillic are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


891 and CP891 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


903 and CP903 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


904 and CP904 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


905, CP905 and ebcdic-cp-tr are aliases for this charset. Source: IBM 3174 Character Set Ref, GA27-3831-02, March 1990.


918, CP918 and ebcdic-cp-ar2 are aliases for this charset. Source: IBM NLS RM Vol2 SE09-8002-01, March 1990.


iso-ir-143 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-49 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-50 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-51 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-170 is an alias for this charset.


819, CP819, IBM819, ISO8859-1, ISO_8859-1, ISO_8859-1:1987, iso-ir-100, l1 and latin1 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-10, ISO_8859-10, ISO_8859-10:1993, L6, iso-ir-157 and latin6 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-13, ISO_8859-13, ISO_8859-13:1998, iso-baltic, iso-ir-179a, l7 and latin7 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-14, ISO_8859-14, ISO_8859-14:1998, iso-celtic, iso-ir-199, l8 and latin8 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-15, ISO_8859-15, ISO_8859-15:1998, iso-ir-203, l9 and latin9 are aliases for this charset. Source: ISO 2375 registry.


912, CP912, IBM912, ISO8859-2, ISO_8859-2, ISO_8859-2:1987, iso-ir-101, l2 and latin2 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-3, ISO_8859-3, ISO_8859-3:1988, iso-ir-109, l3 and latin3 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-4, ISO_8859-4, ISO_8859-4:1988, iso-ir-110, l4 and latin4 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-5, ISO_8859-5, ISO_8859-5:1988, cyrillic and iso-ir-144 are aliases for this charset. Source: ISO 2375 registry.


ASMO-708, ECMA-114, ISO8859-6, ISO_8859-6, ISO_8859-6:1987, arabic and iso-ir-127 are aliases for this charset. Source: ISO 2375 registry.


ECMA-118, ELOT_928, ISO8859-7, ISO_8859-7, ISO_8859-7:1987, greek, greek8 and iso-ir-126 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-8, ISO_8859-8, ISO_8859-8:1988, hebrew and iso-ir-138 are aliases for this charset. Source: ISO 2375 registry.


ISO8859-9, ISO_8859-9, ISO_8859-9:1989, iso-ir-148, l5 and latin5 are aliases for this charset. Source: ISO 2375 registry.


iso-ir-155 is an alias for this charset. Source: ISO 2375 registry.


e13b and iso-ir-98 are aliases for this charset. Source: ISO 2375 registry.


iso-ir-37 is an alias for this charset. Source: ISO 2375 registry.


ISO_5427:1981 and iso-ir-54 are aliases for this charset. Source: ISO 2375 registry.


ISO_5428:1980 and iso-ir-55 are aliases for this charset. Source: ISO 2375 registry.


ISO_646.basic:1983 and ref are aliases for this charset. Source: ISO 2375 registry.


ISO_646.irv:1983, irv and iso-ir-2 are aliases for this charset. Source: ISO 2375 registry.


iso-ir-152 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-154 and latin1-2-5 are aliases for this charset. Source: ISO 2375 registry.


ISO646-IT and iso-ir-15 are aliases for this charset. Source: ISO 2375 registry.


JIS_C6220-1969, iso-ir-13, katakana and x0201-7 are aliases for this charset. Source: ISO 2375 registry.


ISO646-JP, iso-ir-14 and jp are aliases for this charset. Source: ISO 2375 registry.


jp-ocr-a is an alias for this charset. Source: ISO 2375 registry.


ISO646-JP-OCR-B and jp-ocr-b are aliases for this charset. Source: ISO 2375 registry.


iso-ir-93 and jp-ocr-b-add are aliases for this charset. Source: ISO 2375 registry.


iso-ir-94 and jp-ocr-hand are aliases for this charset. Source: ISO 2375 registry.


iso-ir-95 and jp-ocr-hand-add are aliases for this charset. Source: ISO 2375 registry.


iso-ir-96 is an alias for this charset. Source: ISO 2375 registry.


X0201 is an alias for this charset.


ISO646-YU, iso-ir-141, js and yu are aliases for this charset. Source: ISO 2375 registry.


iso-ir-147 and macedonian are aliases for this charset. Source: ISO 2375 registry.


iso-ir-146 and serbian are aliases for this charset. Source: ISO 2375 registry.


Source: Andrey A. Chernov <ache@nagual.pp.ru>.


GOST_19768-74 is an alias for this charset. Source: Andrey A. Chernov <ache@nagual.pp.ru>.


Source: RFC1489 via Gabor Kiss <kissg@sztaki.hu>. And Andrey A. Chernov <ache@nagual.pp.ru>.


Source: http://cad.ntu-kpi.kiev.ua/multiling/koi8-ru/.


Source: RFC 2319. Mibenum: 2088. Source: http://www.net.ua/KOI8-U/.


ISO646-KR is an alias for this charset.


iso-ir-27 is an alias for this charset. Source: ISO 2375 registry.


ISO646-HU, hu and iso-ir-86 are aliases for this charset. Source: ISO 2375 registry.


iso-ir-9-1 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-9-2 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-8-1 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-8-2 is an alias for this charset. Source: ISO 2375 registry.


ISO646-CU, NC_NC00-10:81, cuba and iso-ir-151 are aliases for this charset. Source: ISO 2375 registry.


ISO646-FR, fr and iso-ir-69 are aliases for this charset. Source: ISO 2375 registry.


ISO646-FR1 and iso-ir-25 are aliases for this charset. Source: ISO 2375 registry.


ISO646-NO, iso-ir-60 and no are aliases for this charset. Source: ISO 2375 registry.


ISO646-NO2, iso-ir-61 and no2 are aliases for this charset. Source: ISO 2375 registry.


next is an alias for this charset. Source: Peter Svanberg - psv@nada.kth.se.


ISO646-PT and iso-ir-16 are aliases for this charset. Source: ISO 2375 registry.


ISO646-PT2 and iso-ir-84 are aliases for this charset. Source: ISO 2375 registry.


FI, ISO646-FI, ISO646-SE, SS636127, iso-ir-10 and se are aliases for this charset. Source: ISO 2375 registry.


ISO646-SE2, iso-ir-11 and se2 are aliases for this charset. Source: ISO 2375 registry.


iso-ir-102 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-179 is an alias for this charset. Source: ISO 2375 registry. &g1esc x2d56 &g2esc x2e56 &g3esc x2f56.


iso-ir-150 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-88 is an alias for this charset. Source: ISO 2375 registry.


iso-ir-18 is an alias for this charset. Source: ISO 2375 registry.


r8 and roman8 are aliases for this charset. Source: LaserJet IIP Printer User’s Manual, HP part no 33471-90901, Hewlett-Packard, June 1989.


iso-ir-19 is an alias for this charset. Source: ISO 2375 registry.


mac is an alias for this charset. Source: The Unicode Standard ver 1.0, ISBN 0-201-56788-1, Oct 1991.


macce is an alias for this charset. Source: Macintosh CE fonts.


iso-ir-158, lap and latin-lap are aliases for this charset. Source: ISO 2375 registry.


8 ASCII and some derivatives


8.1 Usual ASCII

This charset is available in Recode under the name ASCII. In fact, its true name is ANSI_X3.4-1968 as per RFC 1345, accepted aliases being ANSI_X3.4-1986, ASCII, IBM367, ISO646-US, ISO_646.irv:1991, US-ASCII, cp367, iso-ir-6 and us. The shortest way of specifying it in Recode is us.

This documentation used to include ASCII tables. They have been removed since the recode program can now recreate these easily:

recode -lf us                   for commented ASCII
recode -ld us                   for concise decimal table
recode -lo us                   for concise octal table
recode -lh us                   for concise hexadecimal table


8.2 ASCII extended by Latin Alphabets

There are many Latin charsets. The following has been written by Tim Lasko lasko@video.dec.com, a long while ago:

ISO Latin-1, or more completely ISO Latin Alphabet No 1, is now an international standard as of February 1987 (IS 8859, Part 1). For those American USEnet’rs that care, the 8-bit ASCII standard, which is essentially the same code, is going through the final administrative processes prior to publication. ISO Latin-1 (IS 8859/1) is actually one of an entire family of eight-bit one-byte character sets, all having ASCII on the left hand side, and with varying repertoires on the right hand side:

The ISO Latin Alphabet 1 is available as a charset in Recode under the name Latin-1. In fact, its true name is ISO_8859-1:1987 as per RFC 1345, accepted aliases being CP819, IBM819, ISO-8859-1, ISO_8859-1, iso-ir-100, l1 and Latin-1. The shortest way of specifying it in Recode is l1.

It is an eight-bit code which coincides with ASCII for the lower half. This documentation used to include Latin-1 tables. They have been removed since the recode program can now recreate these easily:

recode -lf l1                   for commented ISO Latin-1
recode -ld l1                   for concise decimal table
recode -lo l1                   for concise octal table
recode -lh l1                   for concise hexadecimal table

Next: , Previous: , Up: ASCII misc   [Contents][Index]

8.3 ASCII 7-bits, BS to overstrike

This charset is available in Recode under the name ASCII-BS, with BS as an acceptable alias.

The file is straight ASCII, seven bits only. According to the definition of ASCII, diacritics are applied by a sequence of three characters: the letter, one BS, the diacritic mark. We deviate slightly from this by exchanging the diacritic mark and the letter so, on a screen device, the diacritic will disappear and let the letter alone. At recognition time, both methods are acceptable.

The French quotes are coded by the sequences: < BS " or " BS < for the opening quote and > BS " or " BS > for the closing quote. This artificial convention was inherited in straight ASCII-BS from habits around Bang-Bang entry, and is not well known. But we decided to stick to it so that ASCII-BS charset will not lose French quotes.

The ASCII-BS charset is independent of ASCII, and different. The following examples demonstrate this, knowing in advance that ‘!2’ is the Bang-Bang way of representing an e with an acute accent. Compare:

% echo \!2 | recode -v bang..l1/d
Request: Bang-Bang..ISO-8859-1/Decimal-1
233,  10


% echo \!2 | recode -v bang..bs/d
Request: Bang-Bang..ISO-8859-1..ASCII-BS/Decimal-1
 39,   8, 101,  10

In the first case, the e with an acute accent is merely transmitted by the Latin-1..ASCII mapping, not having a special recoding rule for it. In the Latin-1..ASCII-BS case, the acute accent is applied over the e with a backspace: diacriticised characters have special rules. For the ASCII-BS charset, reversibility is still possible, but there might be difficult cases.
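The overstrike convention itself is easy to sketch. Here is a minimal, illustrative encoder (not Recode's own code, and covering only a handful of Latin-1 letters) that renders é as the diacritic mark, BS, base letter sequence, matching the decimal bytes 39, 8, 101 shown above:

```python
# Toy sketch of the ASCII-BS overstrike convention (not Recode's code):
# a diacriticised letter becomes mark, BS, base letter, so that on a
# printing device the letter overstrikes the mark.
MARKS = {"é": ("'", "e"), "è": ("`", "e"), "ê": ("^", "e"), "ç": (",", "c")}

def to_ascii_bs(text):
    out = []
    for ch in text:
        if ch in MARKS:
            mark, base = MARKS[ch]
            out.append(mark + "\b" + base)   # mark, backspace, letter
        else:
            out.append(ch)
    return "".join(out)

print([ord(c) for c in to_ascii_bs("é")])  # → [39, 8, 101]
```

On a screen device the backspace makes the mark disappear under the letter, which is why Recode puts the mark first.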

Previous: , Up: ASCII misc   [Contents][Index]

8.4 ASCII without diacritics nor underline

This charset is available in Recode under the name flat.

This code is ASCII expunged of all diacritics and underlines, as long as they are applied using three-character sequences with BS in the middle. Also, though slightly unrelated, each control character is represented by a sequence of two or three graphic characters. The newline character, however, keeps its functionality and is not represented.

Note that charset flat is a terminal charset. We can convert to flat, but not from it.

Next: , Previous: , Up: Top   [Contents][Index]

9 Some IBM or Microsoft charsets

Recode provides various IBM or Microsoft code pages (see Tabular). An easy way to find them all at once out of Recode itself is through the command:

recode -l | egrep -i '(CP|IBM)[0-9]'

But also, see the few special charsets presented in the following sections.

Next: , Previous: , Up: IBM and MS   [Contents][Index]

9.1 EBCDIC code

This charset is IBM’s Extended Binary Coded Decimal Interchange Code. This is an eight-bit code. The following three variants were implemented in Recode independently of RFC 1345:


In Recode, the us..ebcdic conversion is identical to the ‘dd conv=ebcdic’ conversion, and the Recode ebcdic..us conversion is identical to the ‘dd conv=ascii’ conversion. This charset also represents the way Control Data Corporation relates EBCDIC to 8-bit ASCII.


In Recode, the us..ebcdic-ccc or ebcdic-ccc..us conversions represent the way Concurrent Computer Corporation (formerly Perkin-Elmer) relates EBCDIC to 8-bit ASCII.


In Recode, the us..ebcdic-ibm conversion is almost identical to the GNU ‘dd conv=ibm’ conversion. Given the exact ‘dd conv=ibm’ conversion table, Recode once said:

Codes  91 and 213 both recode to 173
Codes  93 and 229 both recode to 189
No character recodes to  74
No character recodes to 106

So I arbitrarily chose to recode 213 by 74 and 229 by 106. This makes the EBCDIC-IBM recoding reversible, but this is not necessarily the best correction. In any case, I think that GNU dd should be amended. dd and Recode should ideally agree on the same correction. So, this table might change once again.
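The general shape of an EBCDIC round-trip can be explored with Python's built-in cp037 codec, one common IBM EBCDIC variant. Note this is only an illustration: Recode's ebcdic and ebcdic-ibm tables, like dd's, differ from cp037 in a few positions.

```python
# EBCDIC round-trip using Python's cp037 codec (an IBM EBCDIC variant;
# Recode's own ebcdic tables differ from it in a few positions, so this
# only illustrates the shape of the mapping, not Recode's exact table).
text = "HELLO, world 123"
ebcdic = text.encode("cp037")
assert ebcdic != text.encode("ascii")      # a genuinely different coding
assert ebcdic.decode("cp037") == text      # and it is reversible
print(hex(ebcdic[0]))  # 'H' → 0xc8 in EBCDIC
```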

RFC 1345 brings into Recode 15 other EBCDIC charsets, and 21 other charsets having EBCDIC in at least one of their alias names. You can get a list of all these by executing:

recode -l | grep -i ebcdic

Note that Recode may convert a pure stream of EBCDIC characters, but it does not know how to handle the binary data sometimes inserted between records to delimit them and build physical blocks. If ends of lines are not marked, a fixed record size may produce something readable, but VB or VBS blocking is likely to yield some garbage in the converted results.

Next: , Previous: , Up: IBM and MS   [Contents][Index]

9.2 IBM’s PC code

This charset is available in Recode under the name IBM-PC, with dos, MSDOS and pc as acceptable aliases. The shortest way of specifying it in Recode is pc.

The charset is aimed towards a PC microcomputer from IBM or any compatible. This is an eight-bit code. This charset is fairly old in Recode; its tables were produced a long while ago by mere inspection of a printed chart of the IBM-PC codes and glyphs.

It has CR-LF as its implied surface. This means that, if the original ends of lines have to be preserved while going out of IBM-PC, they should currently be added back through the use of a surface on the other charset or, better, just never removed. Here are examples for both cases:

recode pc..l2/cl < input > output
recode pc/..l2 < input > output
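What the CR-LF implied surface amounts to can be sketched in plain Python: going out of IBM-PC strips the CR-LF line ends, and re-applying the surface puts them back.

```python
# A sketch of the CR-LF implied surface: removing it maps DOS line ends
# to plain newlines, applying it does the reverse.  Recode performs the
# equivalent as part of requests such as pc..l2.
def strip_crlf(data: bytes) -> bytes:
    return data.replace(b"\r\n", b"\n")

def apply_crlf(data: bytes) -> bytes:
    return data.replace(b"\n", b"\r\n")

dos_text = b"line one\r\nline two\r\n"
unix_text = strip_crlf(dos_text)
assert unix_text == b"line one\nline two\n"
assert apply_crlf(unix_text) == dos_text   # the surface is reversible
```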

RFC 1345 brings into Recode 44 ‘IBM’ charsets or code pages, and also 8 other code pages. You can get a list of all these by executing:

recode -l | egrep -i '(CP|IBM)[0-9]'

All charsets or aliases beginning with the letters ‘CP’ or ‘IBM’ also have CR-LF as their implied surface. The same is true for a purely numeric alias in the same family. For example, all of 819, CP819 and IBM819 imply CR-LF as a surface. Note that ISO-8859-1 does not imply a surface, even though it shares the same tabular data as 819.

There are a few discrepancies between this IBM-PC charset and the very similar RFC 1345 charset ibm437. The IBM-PC charset has two extra characters, at positions 20 (Latin-1 0xB6, pilcrow) and 21 (Latin-1 0xA7, section sign); further, it has position 250 as 0xB7, middle dot, while ibm437 has the middle dot at position 249. According to this comparison of code tables, https://www.haible.de/bruno/charsets/conversion-tables/CP437.html, the source for RFC 1345, dkuug.dk/IBM437.TXT, is the only source that defines this mapping.
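Python's cp437 codec follows the usual IBM437 tabular data and can be used to check such positions; it agrees with Recode's IBM-PC charset in putting the middle dot at position 250, though it decodes positions 20 and 21 as control characters rather than the pilcrow and section-sign glyphs:

```python
# Checking cp437 positions with Python's built-in codec (based on the
# unicode.org CP437 table, not on Recode's own tables).
assert bytes([250]).decode("cp437") == "\u00b7"   # middle dot at 250
assert bytes([0x82]).decode("cp437") == "é"       # a typical PC accent
print(bytes([0xE1]).decode("cp437"))  # → ß (the German sharp s)
```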

Previous: , Up: IBM and MS   [Contents][Index]

9.3 Unisys’ Icon code

This charset is available in Recode under the name Icon-QNX, with QNX as an acceptable alias.

The file uses Unisys’ Icon way of representing diacritics with code 25 escape sequences, under the QNX system. This is a seven-bit code, even if eight-bit codes can flow through as part of the IBM-PC charset.

Next: , Previous: , Up: Top   [Contents][Index]

10 Charsets for CDC machines

What is now Recode evolved, through many transformations really, out of a set of programs originally written in COMPASS, Control Data Corporation’s assembler, with bits in FORTRAN, and later rewritten in CDC 6000 Pascal. The CDC heritage shows in the fact that some old CDC charsets are still supported.

The Recode author used to be familiar with CDC Scope-NOS/BE and Kronos-NOS, and many CDC formats. Reading CDC tapes directly on other machines is often a challenge, and Recode does not always solve it. It helps having tapes created in coded mode instead of binary mode, and using S (Stranger) tapes instead of I (Internal) tapes. ANSI labels and multi-file tapes might be the source of trouble. There are ways to handle a few Cyber Record Manager formats, but some of them might be quite difficult to decode properly after the transfer is done.

Recode is usable only for a small subset of NOS text formats, and surely not for binary textual formats, like UPDATE or MODIFY sources, for example. Recode is not especially suited for reading 8/12 or 56/60 packing, yet this could easily be arranged if there were a demand for it. It does not have the ability to translate Display Code directly, as the ASCII conversion implied by tape drivers or FTP does the initial approximation. Recode can decode 6/12 caret notation over Display Code already mapped to ASCII.

Next: , Previous: , Up: CDC   [Contents][Index]

10.1 Control Data’s Display Code

This code is not available in Recode, but repeated here for reference. This is a 6-bit code used on CDC mainframes.

Octal display code to graphic       Octal display code to octal ASCII

00  :    20  P    40  5   60  #     00 072  20 120  40 065  60 043
01  A    21  Q    41  6   61  [     01 101  21 121  41 066  61 133
02  B    22  R    42  7   62  ]     02 102  22 122  42 067  62 135
03  C    23  S    43  8   63  %     03 103  23 123  43 070  63 045
04  D    24  T    44  9   64  "     04 104  24 124  44 071  64 042
05  E    25  U    45  +   65  _     05 105  25 125  45 053  65 137
06  F    26  V    46  -   66  !     06 106  26 126  46 055  66 041
07  G    27  W    47  *   67  &     07 107  27 127  47 052  67 046
10  H    30  X    50  /   70  '     10 110  30 130  50 057  70 047
11  I    31  Y    51  (   71  ?     11 111  31 131  51 050  71 077
12  J    32  Z    52  )   72  <     12 112  32 132  52 051  72 074
13  K    33  0    53  $   73  >     13 113  33 060  53 044  73 076
14  L    34  1    54  =   74  @     14 114  34 061  54 075  74 100
15  M    35  2    55      75  \     15 115  35 062  55 040  75 134
16  N    36  3    56  ,   76  ^     16 116  36 063  56 054  76 136
17  O    37  4    57  .   77  ;     17 117  37 064  57 056  77 073

In older times, : used octal 63, and octal 0 was not a character. The table above shows the ASCII glyph interpretation of codes 60 to 77, yet these 16 codes were once defined differently.

There is no explicit end of line in Display Code, and the Cyber Record Manager introduced many new ways to represent them, the traditional end of lines being reachable by setting RT to ‘Z’. If the 6-bit bytes in a file are sequentially counted from 1, a traditional end of line exists if bytes 10n+9 and 10n+10 are both zero for a given n, in which case these two bytes are not to be interpreted as ::. Also, up to 9 immediately preceding zero bytes, going backward, are to be considered part of the end of line and not interpreted as :.
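The traditional end-of-line rule can be sketched directly from that description (an illustrative check only, not Recode's implementation):

```python
# Sketch of the traditional NOS end-of-line rule: counting 6-bit bytes
# from 1, bytes 10n+9 and 10n+10 both zero mark an end of line (the up
# to 9 preceding zero bytes that belong to it are not handled here).
def has_traditional_eol(sixbit_bytes):
    for n in range(len(sixbit_bytes) // 10):
        # 0-based indices 10n+8 and 10n+9 are 1-based positions
        # 10n+9 and 10n+10.
        if sixbit_bytes[10 * n + 8] == 0 and sixbit_bytes[10 * n + 9] == 0:
            return True
    return False

# Eight code bytes padded with zero bytes at positions 9 and 10:
assert has_traditional_eol([1, 2, 3, 4, 5, 6, 7, 8, 0, 0])
assert not has_traditional_eol([1] * 10)
```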

Next: , Previous: , Up: CDC   [Contents][Index]

10.2 ASCII 6/12 from NOS

This charset is available in Recode under the name CDC-NOS, with NOS as an acceptable alias.

This is one of the charsets in use on CDC Cyber NOS systems to represent ASCII, sometimes named the NOS 6/12 code for coding ASCII. This code is also known as caret ASCII. It is based on a six-bit character set in which small letters and control characters are coded using a ^ escape and, sometimes, a @ escape.

The routines given here presume that the six bits code is already expressed in ASCII by the communication channel, with embedded ASCII ^ and @ escapes.

Here is a table showing which characters are being used to encode each ASCII character.

000  ^5  020  ^#  040     060  0  100 @A  120  P  140  @G  160  ^P
001  ^6  021  ^[  041  !  061  1  101  A  121  Q  141  ^A  161  ^Q
002  ^7  022  ^]  042  "  062  2  102  B  122  R  142  ^B  162  ^R
003  ^8  023  ^%  043  #  063  3  103  C  123  S  143  ^C  163  ^S
004  ^9  024  ^"  044  $  064  4  104  D  124  T  144  ^D  164  ^T
005  ^+  025  ^_  045  %  065  5  105  E  125  U  145  ^E  165  ^U
006  ^-  026  ^!  046  &  066  6  106  F  126  V  146  ^F  166  ^V
007  ^*  027  ^&  047  '  067  7  107  G  127  W  147  ^G  167  ^W
010  ^/  030  ^'  050  (  070  8  110  H  130  X  150  ^H  170  ^X
011  ^(  031  ^?  051  )  071  9  111  I  131  Y  151  ^I  171  ^Y
012  ^)  032  ^<  052  *  072 @D  112  J  132  Z  152  ^J  172  ^Z
013  ^$  033  ^>  053  +  073  ;  113  K  133  [  153  ^K  173  ^0
014  ^=  034  ^@  054  ,  074  <  114  L  134  \  154  ^L  174  ^1
015  ^   035  ^\  055  -  075  =  115  M  135  ]  155  ^M  175  ^2
016  ^,  036  ^^  056  .  076  >  116  N  136 @B  156  ^N  176  ^3
017  ^.  037  ^;  057  /  077  ?  117  O  137  _  157  ^O  177  ^4
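The letter part of the table above is simple to decode. Here is a minimal, illustrative decoder (not Recode's code) that handles only the ^A..^Z escapes for small letters; the full code also uses ^ and @ escapes for punctuation and controls, which this sketch ignores:

```python
# Minimal decoder for the letter part of NOS 6/12 caret ASCII:
# ^A..^Z stand for the small letters a..z (per the table above),
# and other six-bit characters pass through unchanged.
def decode_nos_letters(text):
    out, i = [], 0
    while i < len(text):
        if text[i] == "^" and i + 1 < len(text) and text[i + 1].isupper():
            out.append(text[i + 1].lower())
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

assert decode_nos_letters("^H^E^L^L^O WORLD") == "hello WORLD"
```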

Previous: , Up: CDC   [Contents][Index]

10.3 ASCII “bang bang”

This charset is available in Recode under the name Bang-Bang.

This code, in use mainly on Cybers at Université de Montréal, served to code a lot of French texts. The original name of this charset is ASCII codé Display. This code is also known as Bang-Bang. It is based on a six-bit character set in which capitals, French diacritics and a few others are coded using an ! escape followed by a single character, and control characters using a double ! escape followed by a single character.

The routines given here presume that the six bits code is already expressed in ASCII by the communication channel, with embedded ASCII ! escapes.

Here is a table showing which characters are being used to encode each ASCII character.

000 !!@  020 !!P  040    060 0  100 @   120 !P  140 !@ 160 P
001 !!A  021 !!Q  041 !" 061 1  101 !A  121 !Q  141 A  161 Q
002 !!B  022 !!R  042 "  062 2  102 !B  122 !R  142 B  162 R
003 !!C  023 !!S  043 #  063 3  103 !C  123 !S  143 C  163 S
004 !!D  024 !!T  044 $  064 4  104 !D  124 !T  144 D  164 T
005 !!E  025 !!U  045 %  065 5  105 !E  125 !U  145 E  165 U
006 !!F  026 !!V  046 &  066 6  106 !F  126 !V  146 F  166 V
007 !!G  027 !!W  047 '  067 7  107 !G  127 !W  147 G  167 W
010 !!H  030 !!X  050 (  070 8  110 !H  130 !X  150 H  170 X
011 !!I  031 !!Y  051 )  071 9  111 !I  131 !Y  151 I  171 Y
012 !!J  032 !!Z  052 *  072 :  112 !J  132 !Z  152 J  172 Z
013 !!K  033 !![  053 +  073 ;  113 !K  133 [   153 K  173 ![
014 !!L  034 !!\  054 ,  074 <  114 !L  134 \   154 L  174 !\
015 !!M  035 !!]  055 -  075 =  115 !M  135 ]   155 M  175 !]
016 !!N  036 !!^  056 .  076 >  116 !N  136 ^   156 N  176 !^
017 !!O  037 !!_  057 /  077 ?  117 !O  137 _   157 O  177 !_
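As with the NOS code, the letter part of Bang-Bang is easy to sketch. This toy decoder (not Recode's code) covers only the table's letter rows: a bare letter stands for its small form and !X for capital X; the !! control escapes and the French diacritic extensions (such as ‘!2’ for é) are left out:

```python
# Minimal decoder for the letter part of Bang-Bang: per the table
# above, 'A' in the six-bit stream means 'a', and "!A" means 'A'.
def decode_bang_letters(text):
    out, i = [], 0
    while i < len(text):
        if text[i] == "!" and i + 1 < len(text) and text[i + 1].isalpha():
            out.append(text[i + 1].upper())
            i += 2
        else:
            ch = text[i]
            out.append(ch.lower() if ch.isalpha() else ch)
            i += 1
    return "".join(out)

assert decode_bang_letters("!HELLO !WORLD") == "Hello World"
```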

Next: , Previous: , Up: Top   [Contents][Index]

11 Other micro-computer charsets

Next: , Previous: , Up: Micros   [Contents][Index]

11.1 Apple’s Macintosh code

RFC 1345 brings 2 Macintosh charsets. You can discover them by using grep over the output of ‘recode -l’:

recode -l | grep -i mac

Charsets macintosh and macintosh_ce, as well as their aliases mac and macce have CR as their implied surface.
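The same tabular data is available in Python as mac_roman, which also makes the CR implied surface easy to see: a Mac text file ends its lines with ‘\r’, which a recoding out of the mac charset maps to ‘\n’.

```python
# The macintosh charset via Python's mac_roman codec, plus the effect
# of the CR implied surface on line ends.
assert "é".encode("mac_roman") == b"\x8e"          # é in Mac encoding
mac_file = b"ligne une\rligne deux\r"              # CR line ends
assert mac_file.replace(b"\r", b"\n") == b"ligne une\nligne deux\n"
```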

Previous: , Up: Micros   [Contents][Index]

11.2 Atari ST code

This charset is available in Recode under the name AtariST.

This is the character set used on the Atari ST/TT/Falcon. It is similar to IBM-PC, but differs in some details: it includes some more accented characters, the graphic characters are mostly replaced by Hebrew characters, and there is a true German sharp s, different from Greek beta.

About the end-of-line conversions: the canonical end-of-line on the Atari is ‘\r\n’, but unlike IBM-PC, the OS makes no difference between text and binary input/output; it is up to the application how to interpret the data. In fact, most of the libraries that come with compilers can grok both ‘\r\n’ and ‘\n’ as ends of lines. Many of the users who also have access to Unix systems prefer ‘\n’, to ease porting Unix utilities. So, to ease reversibility, Recode tries to leave ‘\r’ undisturbed through recodings.

Next: , Previous: , Up: Top   [Contents][Index]

12 Various other charsets

A few charsets do not fit well in the previous chapters, and are grouped here. Some of them were added to Recode long ago, at a time when this tool was mainly meant for handling texts written in French. The bias still shows when these charsets are linked to Latin-1 instead of the wider Unicode, but this is being corrected as Recode evolves.

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.1 World Wide Web representations

Character entities have been introduced by SGML and made widely popular through HTML, the markup language in use for the World Wide Web, or Web or WWW for short. For representing unusual characters, HTML texts use special sequences, beginning with an ampersand & and ending with a semicolon ;. The sequence may itself start with a number sign # and be followed by digits, so forming a numeric character reference, or else be an alphabetic identifier, so forming a character entity reference.

The HTML standards have been revised into different HTML levels over time, and the list of allowable character entities differs between them. The later XML, meant to simplify many things, has an option (‘standalone=yes’) which greatly restricts that list. The Recode library is able to convert character references between their mnemonic form and their numeric form, depending on the aimed HTML standard level. It can also, of course, convert between HTML and various other charsets.

Here is a list of those HTML variants which Recode supports. Some notes have been provided by François Yergeau yergeau@alis.com.


This charset is available in Recode under the name XML-standalone, with h0 as an acceptable alias. It is documented in section 4.1 of http://www.w3.org/TR/REC-xml. It only knows ‘&amp;’, ‘&gt;’, ‘&lt;’, ‘&quot;’ and ‘&apos;’.


This charset is available in Recode under the name HTML_1.1, with h1 as an acceptable alias. HTML 1.0 was never really documented.


This charset is available in Recode under the name HTML_2.0, and has RFC1866, 1866 and h2 for aliases. HTML 2.0 entities are listed in RFC 1866. Basically, there is an entity for each alphabetical character in the right part of ISO 8859-1. In addition, there are four entities for syntax-significant ASCII characters: ‘&amp;’, ‘&gt;’, ‘&lt;’ and ‘&quot;’.


This charset is available in Recode under the name HTML-i18n, and has RFC2070 and 2070 for aliases. RFC 2070 added entities to cover the whole right part of ISO 8859-1. The list is conveniently accessible at http://www.alis.com:8085/ietf/html/html-latin1.sgml. In addition, four i18n-related entities were added: ‘&zwnj;’ (‘&#8204;’), ‘&zwj;’ (‘&#8205;’), ‘&lrm;’ (‘&#8206;’) and ‘&rlm;’ (‘&#8207;’).


This charset is available in Recode under the name HTML_3.2, with h3 as an acceptable alias. HTML 3.2 took up the full Latin-1 list but not the i18n-related entities from RFC 2070.


This charset is available in Recode under the name HTML_4.0, and has h4 and h for aliases. Beware that the particular alias h is not tied to HTML 4.0, but to the highest HTML level supported by Recode; so it might later represent HTML level 5 if this is ever created. HTML 4.0 has the whole Latin-1 list, a set of entities for symbols, mathematical symbols, and Greek letters, and another set for markup-significant and internationalization characters comprising the 4 ASCII entities, the 4 i18n-related from RFC 2070 plus some more. See http://www.w3.org/TR/REC-html40/sgml/entities.html.

Printable characters from Latin-1 may be used directly in an HTML text. However, partly because people have deficient keyboards, partly because people want to transmit HTML texts over non 8-bit clean channels while not using MIME, it is common (yet debatable) to use character entity references even for Latin-1 characters, when they fall outside ASCII (that is, when they have the 8th bit set).

When you recode from another charset to HTML, beware that all occurrences of double quotes, ampersands, and left or right angle brackets are translated into special sequences. However, in practice, people often use ampersands and angle brackets in the other charset for introducing HTML commands, compromising it: it is not pure HTML, nor is it pure other charset. These particular translations can be rather inconvenient; they may be specifically inhibited through the command option ‘-d’ (see Mixed).

Codes not having a mnemonic entity are output by Recode using the ‘&#nnn;’ notation, where nnn is a decimal representation of the UCS code value. When there is an entity name for a character, it is always preferred over a numeric character reference. ASCII printable characters are always generated directly. So is the newline. While reading HTML, Recode supports numeric character references as alternate writings, even when written as hexadecimal numbers, as in ‘&#xfffd;’. This is documented in:


When Recode translates to HTML, the translation occurs according to the HTML level as selected by the goal charset. When translating from HTML, Recode not only accepts the character entity references known at that level, but also those of all other levels, as well as a few alternative special sequences, to be forgiving to files using other HTML standards.

Recode can be used to normalise an HTML file using oldish conventions. For example, it accepts ‘&AE;’, as this once was a valid writing, somewhere. However, it should always produce ‘&AElig;’ instead of ‘&AE;’. Yet, this is not completely true. If one does:

recode h3..h3 < input

the operation will be optimised into a mere copy, and you can get ‘&AE;’ this way, if you had some in your input file. But if you explicitly defeat the optimisation, like this maybe:

recode h3..u2,u2..h3 < input

then ‘&AE;’ should be normalised into ‘&AElig;’ by the operation.
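The entity conversions described in this section can be approximated with Python's html module: on input, named and numeric character references (decimal or hexadecimal) all denote the same character, and on output non-ASCII can fall back on numeric references.

```python
import html

# Reading: named, decimal and hexadecimal references are equivalent.
assert html.unescape("&AElig;") == html.unescape("&#198;") == "Æ"
assert html.unescape("&#xe9;") == "é"

# Writing: the four syntax-significant ASCII characters become
# entities, and non-ASCII can be emitted as numeric references.
assert html.escape('a < b & "c"') == "a &lt; b &amp; &quot;c&quot;"
assert "é".encode("ascii", "xmlcharrefreplace") == b"&#233;"
```

Unlike Recode, html.unescape knows only one (large) entity list rather than per-level lists, so it cannot reproduce the HTML-level distinctions discussed above.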

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.2 LaTeX macro calls

This charset is available in Recode under the name LaTeX and has ltex as an alias. It is used for ASCII files coded to be read by LaTeX or, in certain cases, by TeX.

Whenever you recode from another charset to LaTeX, beware that all occurrences of backslashes \ are translated into the string ‘\backslash{}’. However, in practice, people often use backslashes in the other charset for introducing TeX commands, compromising it: it is not pure TeX, nor is it pure other charset. This translation of backslashes into ‘\backslash{}’ can be rather inconvenient; it may be inhibited through the command option ‘-d’ (see Mixed).
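A toy sketch (not Recode's table) of the kind of mapping a Latin-1 to LaTeX recoding performs: accented letters become accent macros, and a literal backslash becomes ‘\backslash{}’.

```python
# Illustrative Latin-1 to LaTeX mapping for a handful of characters;
# Recode's real table covers far more and chooses its own spellings.
LATEX = {"é": r"\'e", "è": r"\`e", "à": r"\`a",
         "ç": r"\c{c}", "\\": r"\backslash{}"}

def to_latex(text):
    return "".join(LATEX.get(ch, ch) for ch in text)

print(to_latex("déjà vu"))   # → d\'ej\`a vu
```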

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.3 BibTeX macro calls

This charset is available in Recode under the name BibTeX with aliases bibtex and btex. It is used for ASCII files coded to be read by BibTeX or, in certain cases, by LaTeX or TeX.

This charset is very similar to LaTeX. The only difference is that diacritics are enclosed between ‘{}’. Refer to LaTeX charset for further information. See LaTeX.

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.4 GNU project documentation files

This charset is available in Recode under the name Texinfo and has texi and ti for aliases. It is used by the GNU project for its documentation. Texinfo files may be converted into Info files by the makeinfo program and into nice printed manuals by the TeX system.

Even if Recode may transform other charsets to Texinfo, it cannot read Texinfo files yet. Moreover, usages keep changing between versions of Texinfo, and Recode only partially succeeds in correctly following these changes. So, for now, Texinfo support in Recode should be considered as work still in progress (!).

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.5 Bibliographic character sets

These two character sets are intended to work with ASCII for exchange of bibliographic information.

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.6 Vietnamese charsets

We are currently experimenting in Recode with a few character sets and transliterated forms for handling the Vietnamese language. They are quite briefly summarised here.


The TCVN charset has an incomplete name. It might be one of the three charsets VN1, VN2 or VN3. Also, VN2 might be a second version of VISCII. This remains to be clarified.


This is an 8-bit character set which seems to be rather popular for writing Vietnamese.


This is an 8-bit character set for Vietnamese. Not much reference material is available.


The VIQR convention is a 7-bit, ASCII transliteration for Vietnamese.


The VNI convention is an 8-bit, Latin-1 transliteration for Vietnamese.

Still lacking for Vietnamese in Recode are the charsets CP1129 and CP1258.

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.7 African charsets

Some African character sets are available for a few languages, when these are heavily used in countries where French is also currently spoken.

One African charset is usable for Bambara, Ewondo and Fulfude, as well as for French. This charset is available in Recode under the name AFRFUL-102-BPI_OCIL. Accepted aliases are bambara, bra, ewondo and fulfude. Transliterated forms of the same are available under the name AFRFUL-103-BPI_OCIL. Accepted aliases are t-bambara, t-bra, t-ewondo and t-fulfude.

Another African charset is usable for Lingala, Sango and Wolof, as well as for French. This charset is available in Recode under the name AFRLIN-104-BPI_OCIL. Accepted aliases are lingala, lin, sango and wolof. Transliterated forms of the same are available under the name AFRLIN-105-BPI_OCIL. Accepted aliases are t-lingala, t-lin, t-sango and t-wolof.

To ease exchange with ISO-8859-1, there is a charset conveying transliterated forms for Latin-1 in a way which is compatible with the other African charsets in this series. This charset is available in Recode under the name AFRL1-101-BPI_OCIL. Accepted aliases are t-fra and t-francais.

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.8 Cyrillic and other charsets

The following Cyrillic charsets are already available in Recode through RFC 1345 tables: CP1251 with aliases 1251, ms-cyrl and windows-1251; CSN_369103 with aliases ISO-IR-139 and KOI8_L2; ECMA-cyrillic with aliases ECMA-113, ECMA-113:1986 and iso-ir-111; IBM880 with aliases 880, CP880 and EBCDIC-Cyrillic; INIS-cyrillic with alias iso-ir-51; ISO-8859-5 with aliases cyrillic, ISO-8859-5:1988 and iso-ir-144; KOI-7; KOI-8 with alias GOST_19768-74; KOI8-R; KOI8-RU and finally KOI8-U.

There seems to remain some confusion in Roman charsets for Cyrillic languages, and because a few users requested it repeatedly, Recode now offers special services in that area. Consider these charsets as experimental and debatable, as the extraneous tables describing them are still a bit fuzzy or non-standard. Hopefully, in the long run, these charsets will be covered in Keld Simonsen’s works to the satisfaction of everybody, and this section will merely disappear.


This charset is available under the name KEYBCS2, with Kamenicky as an accepted alias.


This charset is available under the name CORK, with T1 as an accepted alias.


This charset is available under the name KOI-8_CS2.

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.9 Java code

This charset is available under the name Java, and should be considered experimental for now.

ASCII characters represent themselves. Characters outside ASCII are coded as ‘\uNNNN’, where ‘NNNN’ stands for the four-digit hexadecimal value of the character within Unicode. The canonical representation uses lower case for the ‘u’ prefix and for the hexadecimal digits, yet Recode also accepts upper case.

There is currently no attempt to distinguish Java comments from Java strings while recoding; this may be corrected some day.
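The surface just described is easy to sketch (an illustration, not Recode's code; characters beyond the Basic Multilingual Plane, which Java writes as surrogate pairs, are ignored here):

```python
# Sketch of the Java surface: ASCII passes through, anything else
# becomes \uNNNN with lower-case hexadecimal digits.
def to_java(text):
    return "".join(ch if ord(ch) < 128 else "\\u%04x" % ord(ch)
                   for ch in text)

def from_java(text):
    # unicode_escape accepts upper-case hex digits too, as Recode
    # does on input.
    return text.encode("ascii").decode("unicode_escape")

assert to_java("café") == "caf\\u00e9"
assert from_java("caf\\u00E9") == "café"
```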

Next: , Previous: , Up: Miscellaneous   [Contents][Index]

12.10 Easy French conventions

This charset is available in Recode under the name Texte and has txte for an alias. It is a seven bits code, identical to ASCII-BS, save for French diacritics which are noted using a slightly different convention.

At text entry time, these conventions provide a little speedup. At read time, they slightly improve readability over a few alternate ways of coding diacritics. Of course, it would be better to have a specialised keyboard for direct eight-bit entry, and fonts for immediately displaying eight-bit ISO Latin-1 characters. But not everybody is so fortunate. In a few mailing environments, sadly enough, it still happens that the eighth bit is willfully destroyed.

Easy French has been in use in France for a while. I only slightly adapted it (the diaeresis option) to make it more comfortable for several usages in Québec originating from Université de Montréal. In fact, the main problem for me was not necessarily to invent Easy French, but to recognise the “best” convention to use (best not being defined here), and to try to solve the main pitfalls associated with the selected convention. Shortly said, we have:


for e (and some other vowels) with an acute accent,


for e (and some other vowels) with a grave accent,


for e (and some other vowels) with a circumflex accent,


for e (and some other vowels) with a diaeresis,


for c with a cedilla.

There is no attempt at expressing the ae and oe diphthongs. French also uses tildes over n and a, but seldom, and this is not represented either. In some countries, : is used instead of " to mark diaeresis; Recode supports only one convention per call, depending on the ‘-c’ option of the recode command. French quotes (sometimes called “angle quotes”) are noted the same way English quotes are noted in TeX, id est by `` and ''. No effort has been made to preserve Latin ligatures (æ, œ) which are representable in several other charsets, so these ligatures may be lost through Easy French conventions.

The convention is prone to losing information, because the diacritic meaning overloads some characters that already have other uses. To alleviate this, some knowledge of the French language is boosted into the recognition routines. So, the following subtleties are systematically obeyed by the various recognisers.

  1. A comma which follows a c is interpreted as a cedilla only if it is followed by one of the vowels a, o or u.
  2. A single quote which follows an e does not necessarily mean an acute accent if it is followed by another single quote. For example:

    will give an e with an acute accent.


    will give a simple e, with a closing quotation mark.


    will give an e with an acute accent, followed by a closing quotation mark.

    There is a problem induced by this convention when English quotations appear within a French text. In sentences like:

    There's a meeting at Archie's restaurant.

    the single quotes will be mistaken twice for acute accents. So English contractions and suffix possessives could be mangled.

  3. A double quote or colon, depending on the ‘-c’ option, which follows a vowel is interpreted as a diaeresis only if it is followed by another letter. But there are in French several words that end with a diaeresis, and the Recode library is aware of them. There are words ending in “igue”, either feminine words without a relative masculine (besaiguë and ciguë), or feminine words with a relative masculine (aiguë, ambiguë, contiguë, exiguë, subaiguë and suraiguë). There are also words not ending in “igue” but instead ending in “i”, ending in “e” (canoë) or ending in “u” (Esaü).

    Just to complete this topic, note that it would be wrong to make a rule for all words ending in “igue” as needing a diaeresis, as there are counter-examples (becfigue, bèsigue, bigue, bordigue, bourdigue, brigue, contre-digue, digue, d’intrigue, fatigue, figue, garrigue, gigue, igue, intrigue, ligue, prodigue, sarigue and zigue).
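Two of the rules above can be sketched as simple substitutions. This toy decoder is not Recode's implementation, and it ignores the quotation and word-list subtleties of rules 2 and 3; it only shows the basic shape of the convention:

```python
import re

# Toy decoder for two Easy French rules: a comma after c is a cedilla
# only before a, o or u, and a single quote after e marks an acute
# accent (the English-quotation subtleties are ignored here).
def decode_easy_french(text):
    text = re.sub(r"c,(?=[aou])", "ç", text)
    text = re.sub(r"e'", "é", text)
    return text

assert decode_easy_french("fac,on") == "façon"
assert decode_easy_french("avec, toi") == "avec, toi"   # no vowel: kept
assert decode_easy_french("e'te'") == "été"
```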

Previous: , Up: Miscellaneous   [Contents][Index]

12.11 Mule as a multiplexed charset

This version of Recode barely starts supporting multiplexed or super-charsets, that is, encoding methods by which a single text stream may contain a combination of more than one constituent charset. The only multiplexed charset in Recode is Mule, and even then, it is only very partially implemented: the only correspondence available is with Latin-1. The author quickly implemented this only because he needed it for himself. However, it is intended that Mule support will become more real in subsequent releases of Recode.

Multiplexed charsets are not to be confused with mixed charset texts (see Mixed). For mixed charset input, the rules for deciding which charset is current at any given place are somewhat informal, and driven by the semantics of what the file contains. Multiplexed charsets, on the other hand, are designed to be interpreted fairly precisely, quite independently of any informational context.

The spelling Mule originally stood for multilingual enhancement to GNU Emacs; it is the result of a collective effort orchestrated by Handa Ken’ichi since 1993. When Mule was rewritten in the main development stream of GNU Emacs 20, the FSF renamed it MULE, meaning multilingual environment in GNU Emacs. Even though the charset Mule is meant to stay internal to GNU Emacs, it sometimes breaks loose into external files, and as a consequence a recoding tool is sometimes needed. Within Emacs, Mule comes with Leim, which stands for libraries of Emacs input methods. One of these libraries is named quail18.

Next: , Previous: , Up: Top   [Contents][Index]

13 All about surfaces

A surface is the varnish added over a charset so it fits into actual bits and bytes. Exactly how ends of lines are encoded is not really pertinent to the charset, and so there is a surface for ends of lines. Base64 is also a surface, as any charset may be encoded in it. Other examples would be DES enciphering or gzip compression (even if Recode does not offer them currently): these are ways to give real life to theoretical charsets.

The trivial surface consists of using a fixed number of bits (often eight) for each character; together, the bits hold the integer value of the character’s index in its charset table. There are many kinds of surfaces beyond the trivial one, all having the purpose of improving selected qualities of storage or transmission. For example, surfaces might increase the resistance to channel limits (Base64), the transmission speed (gzip), the information privacy (DES), the conformance to operating system conventions (CR-LF), the blocking into records (VB), and surely other things as well19. Many surfaces may be applied to a stream of characters from a charset; the order of application is important, and surfaces should be removed in the reverse order of their application.

Even if surfaces may generally be applied to various charsets, some surfaces were specifically designed for a particular charset and would not make much sense applied to other charsets. In such cases, these conceptual surfaces have been implemented as Recode charsets rather than as surfaces. This choice yields cleaner syntax and usage. See Universal.

Surfaces are implemented within Recode as special charsets which may only transform to or from the special charset data. Clever users may exploit this knowledge and write surface names in requests exactly as if they were pure charsets, when the only need is to change surfaces without any recoding between real charsets. In such contexts, data may also be used as if it were some kind of generic, anonymous charset: the request ‘data..surface’ merely applies the given surface, while the request ‘surface..data’ removes it.

This chapter presents all surfaces currently available.

Next: , Previous: , Up: Surfaces   [Contents][Index]

13.1 Permuting groups of bytes

A permutation is a surface transformation which reorders groups of eight-bit bytes. A 21 permutation exchanges pairs of successive bytes. If the text contains an odd number of bytes, the last byte is merely copied. A 4321 permutation inverts the order of quadruples of bytes. If the text does not contain a multiple of four bytes, the remaining bytes are nevertheless permuted: as 321 if there are three bytes, as 21 if there are two bytes, or merely copied otherwise.
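These reorderings are easy to sketch in Python (for illustration only; Recode implements them in C as part of its surface machinery):

```python
def permute_21(data: bytes) -> bytes:
    """Exchange pairs of successive bytes; a trailing odd byte is copied."""
    out = bytearray()
    for i in range(0, len(data) - 1, 2):
        out += bytes([data[i + 1], data[i]])
    if len(data) % 2:                 # odd number of bytes: copy the last one
        out.append(data[-1])
    return bytes(out)

def permute_4321(data: bytes) -> bytes:
    """Invert quadruples of bytes; a short tail is permuted as 321, 21 or copied."""
    out = bytearray()
    for i in range(0, len(data), 4):
        out += data[i:i + 4][::-1]    # reversing 3, 2 or 1 bytes handles the tail
    return bytes(out)

print(permute_21(b"abcde"))      # b'badce'
print(permute_4321(b"abcdefg"))  # b'dcbagfe'
```

Note that reversing a 3-byte or 2-byte tail is exactly the 321 or 21 permutation described above, so one slice reversal covers all the remainder cases.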


This surface is available in Recode under the name 21-Permutation and has swabytes for an alias.


This surface is available in Recode under the name 4321-Permutation.

Next: , Previous: , Up: Surfaces   [Contents][Index]

13.2 Representation for end of lines

The same charset might differ slightly from one system to another, for the single reason that ends of lines are not represented identically on all systems. Within Recode, the representation for an end of line is the ASCII or UCS code with value 10, or LF. Other conventions for representing ends of lines are available through surfaces.


This convention is popular on Apple’s Macintosh machines. When this surface is applied, each line is terminated by CR, which has ASCII value 13. Unless the library is operating in strict mode, adding or removing the surface will in fact exchange CR and LF, for better reversibility. However, in strict mode, the exchange does not happen: any CR is copied verbatim while applying the surface, and any LF is copied verbatim while removing it.

This surface is available in Recode under the name CR; it does not have any aliases. It is the implied surface for the Apple Macintosh related charsets.
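The non-strict behaviour, exchanging the two codes, makes the operation its own inverse. A Python sketch of the idea (not Recode's actual code):

```python
# Exchange CR (13) and LF (10); applying the mapping twice restores the input,
# which is what makes the non-strict CR surface reversible.
CR_LF_SWAP = bytes.maketrans(b"\r\n", b"\n\r")

def toggle_cr_surface(data: bytes) -> bytes:
    return data.translate(CR_LF_SWAP)

text = b"one\ntwo\n"
mac = toggle_cr_surface(text)           # lines now end in CR, Macintosh style
assert toggle_cr_surface(mac) == text   # removing the surface restores LF
```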


This convention is popular on Microsoft systems running on IBM PCs and compatibles. When this surface is applied, each line is terminated by a sequence of two characters: one CR followed by one LF, in that order.

For compatibility with oldish MS-DOS systems, removing a CR-LF surface will discard the first encountered C-z, which has ASCII value 26, and everything following it in the text. Adding this surface will not, however, append a C-z to the result.

This surface is available in Recode under the name CR-LF and has cl for an alias. This is the implied surface for the IBM or Microsoft related charsets or code pages.
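Removing the CR-LF surface can thus be sketched as follows, in illustrative Python: truncate the text at the first C-z, then turn each CR-LF pair into a single LF:

```python
def remove_crlf_surface(data: bytes) -> bytes:
    # Discard the first C-z (ASCII 26) and everything after it, since old
    # MS-DOS systems used it as an end-of-file marker.
    ctrl_z = data.find(b"\x1a")
    if ctrl_z != -1:
        data = data[:ctrl_z]
    # Each CR-LF pair becomes a bare LF.
    return data.replace(b"\r\n", b"\n")

print(remove_crlf_surface(b"one\r\ntwo\r\n\x1ajunk"))  # b'one\ntwo\n'
```

Applying the surface is the reverse substitution, without appending any C-z, as described above.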

Some other charsets might have their own representation for an end of line, different from LF. For example, this is the case for various EBCDIC charsets, or Icon-QNX. The recoding of ends of lines is intimately tied into such charsets; it is not available separately as surfaces.

Next: , Previous: , Up: Surfaces   [Contents][Index]

13.3 MIME contents encodings

RFC 2045 defines two 7-bit surfaces meant to prepare 8-bit messages for transmission. Base64 is especially suitable for binary entities, while Quoted-Printable is especially suitable for text entities in which the lower 128 characters of the underlying charset coincide with ASCII.


This surface is available in Recode under the name Base64, with b64 and 64 as acceptable aliases.


This surface is available in Recode under the name Quoted-Printable, with quote-printable and QP as acceptable aliases.
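Both surfaces are also widely implemented outside Recode; for instance, Python's standard library provides them, which illustrates the round-trip nature of applying and then removing a surface:

```python
import base64
import quopri

data = "Tschüß!".encode("latin-1")        # some 8-bit Latin-1 text

b64 = base64.encodebytes(data)            # apply the Base64 surface
assert base64.decodebytes(b64) == data    # remove it: the original returns

qp = quopri.encodestring(data)            # apply Quoted-Printable
assert quopri.decodestring(qp) == data    # ASCII bytes stay readable in qp
```

Note how the Quoted-Printable form keeps the ASCII characters legible while escaping only the 8-bit ones, whereas Base64 encodes everything uniformly.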

Note that UTF-7, which may also be considered a MIME surface, is provided as a genuine charset instead, as it necessarily relates to UCS-2 and nothing else. See UTF-7.

A little historical note, which also shows the three levels of acceptance of Internet standards: MIME changed from a “Proposed Standard” (RFC 1341–1344, 1992) to a “Draft Standard” (RFC 1521–1523) in 1993, and was recycled as a “Draft Standard” in November 1996. It is not yet a “Full Standard”.

Next: , Previous: , Up: Surfaces   [Contents][Index]

13.4 Interpreted character dumps

Dumps are surfaces meant to express, in more readable ways, the bit patterns used to represent characters. They allow the inspection or debugging of character streams; they may also assist somewhat in the production of C source code which, once compiled, would hold in memory a copy of the original coding. However, Recode does not attempt in any way to produce complete C source files in dumps; user hand editing or Makefile trickery is still needed to add missing lines. Dumps may be given in decimal, hexadecimal or octal, and be based on chunks of either one, two or four eight-bit bytes. The formatting has been chosen to respect the C language syntax for number constants, with commas and newlines inserted appropriately.

However, when dumping two or four byte chunks, the last chunk may be incomplete. This is observable through the use of a narrower expression for that last chunk only. Such a shorter chunk would not compile properly within a C initialiser, as all members of an array share a single type, and so have identical sizes.
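As an illustration of the chunking and of the narrower last chunk, here is a sketch of a Hexadecimal-2 style dump in Python (the exact spacing, casing and line breaking of real Recode output may differ):

```python
def hex2_dump(data: bytes) -> str:
    # Group bytes in pairs; the last chunk may be a single byte and is then
    # printed with a narrower (two-digit) expression.
    chunks = [data[i:i + 2] for i in range(0, len(data), 2)]
    words = []
    for chunk in chunks:
        if len(chunk) == 2:
            words.append("0x%04X" % int.from_bytes(chunk, "big"))
        else:
            words.append("0x%02X" % chunk[0])
    return ", ".join(words)

print(hex2_dump(b"ABCDE"))   # 0x4142, 0x4344, 0x45
```

The narrower `0x45` at the end is exactly the incomplete-chunk case discussed above, and it is what lets the reverse operation recover the original byte count.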


This surface corresponds to an octal expression of each input byte.

It is available in Recode under the name Octal-1, with o1 and o as acceptable aliases.


This surface corresponds to an octal expression of each pair of input bytes, except for the last pair, which may be short.

It is available in Recode under the name Octal-2 and has o2 for an alias.


This surface corresponds to an octal expression of each quadruple of input bytes, except for the last quadruple, which may be short.

It is available in Recode under the name Octal-4 and has o4 for an alias.


This surface corresponds to a decimal expression of each input byte.

It is available in Recode under the name Decimal-1, with d1 and d as acceptable aliases.


This surface corresponds to a decimal expression of each pair of input bytes, except for the last pair, which may be short.

It is available in Recode under the name Decimal-2 and has d2 for an alias.


This surface corresponds to a decimal expression of each quadruple of input bytes, except for the last quadruple, which may be short.

It is available in Recode under the name Decimal-4 and has d4 for an alias.


This surface corresponds to a hexadecimal expression of each input byte.

It is available in Recode under the name Hexadecimal-1, with x1 and x as acceptable aliases.


This surface corresponds to a hexadecimal expression of each pair of input bytes, except for the last pair, which may be short.

It is available in Recode under the name Hexadecimal-2, with x2 for an alias.


This surface corresponds to a hexadecimal expression of each quadruple of input bytes, except for the last quadruple, which may be short.

It is available in Recode under the name Hexadecimal-4, with x4 for an alias.

When removing a dump surface, that is, when reading a dump back into a sequence of bytes, the narrower expression for a short last chunk is recognised, so dumping is a fully reversible operation. However, if you want to produce dumps by means other than Recode, beware that for decimal dumps, the library has to rely on the number of spaces to establish the original byte size of the chunk.

Although the library might report reversibility errors, removing a dump surface is a rather forgiving process: one may mix bases, group a variable number of values per source line, or use shorter chunks in places other than at the far end. Also, source lines not beginning with a number are skipped. So, Recode should often be able to read a whole C header file wrapping the results of a previous dump, and regenerate the original byte string.

Previous: , Up: Surfaces   [Contents][Index]

13.5 Artificial data for testing

A few pseudo-surfaces exist to generate debugging data out of thin air. These surfaces are only meant for the expert Recode user, and are useful only in a few contexts, such as generating binary data for a recoding to act upon.

Debugging surfaces, when removed, insert their generated data at the beginning of the output stream, then copy the whole input stream after the generated data, unchanged. This strange removal constraint comes from the fact that debugging surfaces are usually specified in the before position, rather than the after position, within a request. With debugging surfaces, one often recodes the file /dev/null in filter mode. Specifying many debugging surfaces at once has an accumulation effect on the output; since surfaces are removed from right to left, each generating its data at the beginning of the previous output, the net effect is an impression that debugging surfaces are generated from left to right, each appending to the result of the previous one. In any case, any real input data gets appended after what was generated.


When removed, this surface produces 128 single bytes, the first having value 0, the second having value 1, and so forth until all 128 values have been generated.


When removed, this surface produces 256 single bytes, the first having value 0, the second having value 1, and so forth until all 256 values have been generated.


When removed, this surface produces 64509 double bytes, the first having value 0, the second having value 1, and so forth until all values have been generated, but excluding risky UCS-2 values, like all codes from the surrogate UCS-2 area (for UTF-16), the byte order mark, and values known as invalid UCS-2.


When removed, this surface produces 65536 double bytes, the first having value 0, the second having value 1, and so forth until all 65536 values have been generated.

As an example, the command ‘recode l5/test8..dump < /dev/null’ is a convoluted way to produce output similar to ‘recode -lf l5’. It says to generate all 256 possible bytes and interpret them as ISO-8859-9 codes, converting them to UCS-2. The resulting UCS-2 characters are dumped one per line, each accompanied by its explanatory name.
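In Python terms, the data generated by removing the test8 and test16 surfaces amounts to the following (the byte order shown for test16 is an assumption made purely for the sketch):

```python
# What removing the test8 surface prepends to the output: every single byte
# value, in order.
test8 = bytes(range(256))

# test16 similarly yields all 65536 double-byte values; big-endian order is
# assumed here for illustration.  The test16-rounded variant additionally
# skips surrogates and other risky UCS-2 values, as described above.
test16 = b"".join(n.to_bytes(2, "big") for n in range(65536))

print(len(test8), len(test16))   # 256 131072
```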

Next: , Previous: , Up: Top   [Contents][Index]

14 Internal aspects

The following explanations of the internals of Recode should help people who want to dive into the Recode sources to add new charsets. Adding new charsets does not require much knowledge about the overall organisation of Recode. You can instead concentrate on your new charset, letting the remainder of the Recode mechanics take care of interconnecting it with all other charsets.

If you intend to play seriously at modifying Recode, beware that you may need some other GNU tools which were not required when you first installed Recode. If you modify or create any .l file, then you need Flex, and a better awk such as mawk, GNU awk, or nawk. If you modify the documentation (and you should!), you need makeinfo. If you are really audacious, you may also want Perl for modifying tabular processing, then m4, Autoconf, Automake and libtool for adjusting configuration matters.

Next: , Previous: , Up: Internals   [Contents][Index]

14.1 Overall organisation

The Recode mechanics slowly evolved over many years, and it would be tedious to explain all the problems I met and mistakes I made along the way, yielding the current behaviour. Surely, one of the key choices was to stop trying to do all conversions in memory, one line or one buffer at a time. It has been fruitful to use the character stream paradigm, and the elementary recoding steps now convert a whole stream to another. Most of the control complexity in Recode exists so that each elementary recoding step stays simple, making it easier to add new ones. The whole point of Recode, as I see it, is providing a comfortable nest for growing new charset conversions.

The main Recode driver constructs, while initialising all conversion modules, a table giving all the conversion routines available (single steps) and, for each, the starting charset and the ending charset. If we consider these charsets as being the nodes of a directed graph, each single step may be considered as an oriented arc from one node to the other. A cost is attributed to each arc: for example, a high penalty is given to single steps which are prone to losing characters, a lower penalty is given to those which need to study more than one input character to produce an output character, etc.

Given a starting code and a goal code, Recode computes the most economical route through the elementary recodings, that is, the best sequence of conversions that will transform the input charset into the final charset. To speed up execution, Recode looks for subsequences of conversions which are simple enough to be merged, and then dynamically creates new single steps to represent these mergings.
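The route computation just described is an ordinary cheapest-path search over the conversion graph. Here is a toy sketch with invented charset names and costs (Recode's real steps and penalties differ):

```python
import heapq

# arcs[charset] = list of (neighbour, cost); names and costs are invented.
arcs = {
    "latin1": [("ucs2", 1)],
    "ucs2":   [("latin1", 1), ("utf8", 1), ("ibm850", 3)],
    "utf8":   [("ucs2", 1)],
    "ibm850": [("ucs2", 3)],
}

def cheapest_route(start, goal):
    """Dijkstra search: lowest total cost and charset sequence from start to goal."""
    queue = [(0, [start])]
    seen = set()
    while queue:
        cost, path = heapq.heappop(queue)
        node = path[-1]
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, step_cost in arcs.get(node, []):
            heapq.heappush(queue, (cost + step_cost, path + [neighbour]))
    return None

print(cheapest_route("latin1", "ibm850"))  # (4, ['latin1', 'ucs2', 'ibm850'])
```

Each arc of the returned path corresponds to one single step; the later merging of simple subsequences then shortens the executed sequence further.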

A double step in Recode is a special concept representing a sequence of two single steps, the output of the first single step being the special charset UCS-2, the input of the second single step being also UCS-2. Special Recode machinery dynamically produces efficient, reversible, merge-able single steps out of these double steps.

I made some statistics about how many internal recoding steps are required between any two charsets chosen at random. The initial recoding layout, before optimisation, always uses between 1 and 5 steps. Optimisation can sometimes produce mere copies, which are counted as no steps at all. In other cases, optimisation is unable to save any step. The number of steps after optimisation is currently between 0 and 5. Of course, the expected number of steps is affected by optimisation: it drops from 2.8 to 1.8. This means that Recode uses a theoretical average of a bit less than two steps per recoding job. This looks good. These figures were computed using reversible recodings. In strict mode, optimisation may be somewhat defeated. The number of steps then runs between 1 and 6, both before and after optimisation, and the expected number of steps decreases by a lesser amount, going from 2.2 to 1.3. This is still manageable.

Next: , Previous: , Up: Internals   [Contents][Index]

14.2 Adding new charsets

The main part of Recode is written in C, as are most single steps. A few single steps need to recognise sequences of multiple characters; these are often better written in Flex. It is easy for a programmer to add a new charset to Recode. All it requires is writing a few functions kept in a single .c file, adjusting Makefile.am, and remaking Recode.

One of the functions should convert from any previous charset to the new one. Any previous charset will do, but try to select it so you will not lose too much information while converting. The other function should convert from the new charset to any older one. You do not have to select the same old charset as the one you selected for the previous routine. Once again, select any charset for which you will not lose too much information while converting.

If, for either of these two functions, you have to read multiple bytes of the old charset before recognising the character to produce, you might prefer programming it in Flex in a separate .l file. Prototype your C or Flex files after one of those which already exist, so as to keep the sources uniform. Besides, at make time, all .l files are automatically merged into a single big one by the script mergelex.awk.

There are a few hidden rules about how to write new Recode modules, allowing the automatic creation of decsteps.h and initsteps.h at make time, and the proper merging of all Flex files. Mimetism is a simple approach which relieves me of explaining all these rules! Start with a module closely resembling what you intend to do. Here is some advice for picking a model. First decide if your new charset module is to be driven by algorithms rather than by tables. For algorithmic recodings, see iconqnx.c for C code, or txtelat1.l for Flex code. For table driven recodings, see ebcdic.c for one-to-one style recodings, lat1html.c for one-to-many style recodings, or atarist.c for double-step style recodings. Just select an example in the style that best fits your application.

Each of your source files should have its own initialisation function, named module_charset, which is meant to be executed quickly once, prior to any recoding. It should declare the name of your charsets and the single steps (or elementary recodings) you provide, by calling declare_step one or more times. Besides the charset names, declare_step expects a description of the recoding quality (see recodext.h) and two functions you also provide.

The first such function has the purpose of allocating structures, pre-conditioning conversion tables, and so on. It is also the way to further modify the STEP structure. This function is executed if and only if the single step is retained in an actual recoding sequence. If you do not need such delayed initialisation, merely use NULL for the function argument.

The second function executes the elementary recoding on a whole file.

If you have a recoding table handy in a suitable format but do not use one of the predefined recoding functions, it is still a good idea to use a delayed initialisation to save it anyway, because recode option ‘-h’ will take advantage of this information when available.

Finally, edit Makefile.am to add the source file name of your routines to the C_STEPS or L_STEPS macro definition, depending on whether your routines are written in C or Flex.

Next: , Previous: , Up: Internals   [Contents][Index]

14.3 Adding new surfaces

Adding a new surface is technically quite similar to adding a new charset. See New charsets. A surface is provided as a set of two transformations: one from the predefined special charset data to the new surface, meant to apply the surface, the other from the new surface to the predefined special charset data, meant to remove the surface.

Internally in Recode, the function declare_step recognises when a charset is related to data in this way, and then takes appropriate actions so that the charset indeed gets installed as a surface.

Previous: , Up: Internals   [Contents][Index]

14.4 Comments on the library design

Next: , Previous: , Up: Top   [Contents][Index]

Concept Index

Jump to:   A   B   C   D   E   F   G   H   I   L   M   N   O   P   Q   R   S   T   U   V   W   X  
Index Entry  Section

abbreviated names for charsets and surfaces: Requests
adding new charsets: New charsets
adding new surfaces: New surfaces
African charsets: African
aliases: Requests
alternate names for charsets and surfaces: Requests
ambiguous output, error message: Errors
ASCII table, recreating with Recode: ASCII
average number of recoding steps: Main flow

Bibliographic charsets: ISO 5426 and ANSEL
BibTeX files: BibTeX
box-drawing characters: Recoding
bug reports, where to send: Contributing
byte order mark: UCS-2
byte order swapping: Permutations

caret ASCII code: CDC-NOS
CDC charsets: CDC
CDC Display Code, a table: Display Code
chaining of charsets in a request: Requests
character entities: HTML
character entity references: HTML
character mnemonics, documentation: Tabular
character streams, description: dump-with-names
charset level functions: Charset level
charset names, valid characters: Requests
charset, default: Requests
charset, pure: Surface overview
charset, what it is: Introduction
charsets for CDC machines: CDC
charsets, aliases: Requests
charsets, chaining in a request: Requests
charsets, guessing: Listings
charsets, overview: Charset overview
chset tools: Tabular
codepages: IBM and MS
combining characters: UCS-2
commutativity of surfaces: Requests
contributing charsets: Contributing
conversions, unavailable: Charset overview
convert a subset of characters: Mixed
convert strings and comments: Mixed
copyright conditions, printing: Listings
counting characters: count-characters
CR-LF surface, in IBM-PC charsets: IBM-PC
Ctrl-Z, discarding: End lines
Cyrillic charsets: Others

debugging surfaces: Test
default charset: Requests
description of individual characters: dump-with-names
details about recoding: Recoding
deviations from RFC 1345: Tabular
diacritics and underlines, removing: flat
diacritics, with ASCII-BS charset: ASCII-BS
diaeresis: Recoding
disable map filling: Reversibility
double step: Main flow
dumping characters: Dump
dumping characters, with description: dump-with-names

Easy French: Texte
end of line format: End lines
endianness, changing: Permutations
entities: HTML
error handling: Errors
error level threshold: Errors
error messages: Errors
error messages, suppressing: Reversibility
exceptions to available conversions: Charset overview
exit status: Synopsis

file sequencing: Sequencing
file time stamps: Recoding
filter operation: Synopsis
force recoding: Reversibility
French description of charsets: Listings

guessing charsets: Listings

Haible, Bruno: iconv
handling errors: Errors
help page, printing: Listings
HTML normalization: HTML

IBM codepages: IBM and MS
IBM graphics characters: Recoding
iconv: Design
iconv library: iconv
identifying subsets in charsets: Listings
ignore charsets: Recoding
implied surfaces: Requests
impossible conversions: Charset overview
information about charsets: Listings
initialisation functions, outer: Outer level
initialisation functions, request: Request level
initialisation functions, task: Task level
interface, with iconv library: iconv
intermediate charsets: Requests
internal functions: Charset level
internal recoding bug, error message: Errors
internals: Internals
invalid input, error message: Errors
invocation of recode, synopsis: Synopsis
irreversible recoding: Reversibility
ISO 10646: Universal

languages, programming: Listings
LaTeX files: LaTeX
Latin charsets: ISO 8859
Latin-1 table, recreating with Recode: ISO 8859
leaks, memory: Outer level
letter case, in charset and surface names: Requests
libiconv: iconv
library, iconv: iconv
listing charsets: Listings

map filling: Reversibility
map filling, disable: Reversibility
markup language: HTML
memory leaks: Outer level
memory sequencing: Sequencing
MIME encodings: MIME
misuse of recoding library, error message: Errors
MS-DOS charsets: IBM-PC
MULE, in Emacs: Mule
multiplexed charsets: Mule

names of charsets and surfaces, abbreviation: Requests
new charsets, how to add: New charsets
new surfaces, how to add: New surfaces
non canonical input, error message: Errors
normalise an HTML file: HTML
NOS 6/12 code: CDC-NOS
numeric character references: HTML

outer level functions: Outer level

partial conversion: Mixed
permutations of groups of bytes: Permutations
pipe sequencing: Sequencing
programming language support: Listings
program_name variable: Library
program_name variable: Outer level
pseudo-charsets: Charset overview
pure charset: Surface overview

quality of recoding: Recoding

Recode internals: Internals
Recode request syntax: Requests
Recode use, a tutorial: Tutorial
Recode version, printing: Listings
Recode, and RFC 1345: Tabular
Recode, main flow of operation: Main flow
recode, operation as filter: Synopsis
recode, synopsis of invocation: Synopsis
recode.h header: Library
recodext.h header: Task level
recoding details: Recoding
recoding library: Library
recoding path, rejection: Recoding
recoding steps, statistics: Main flow
removing diacritics and underlines: flat
reporting bugs: Contributing
request level functions: Request level
request, syntax: Requests
reversibility of recoding: Reversibility
RFC 1345: Tabular
RFC 2045: MIME

sequencing: Sequencing
shared library implementation: Design
silent operation: Reversibility
single step: Main flow
source file generation: Listings
speed considerations: Outer level
speed considerations: Request level
status code: Synopsis
strict operation: Reversibility
string and comments conversion: Mixed
subsets in charsets: Listings
super-charsets: Mule
supported programming languages: Listings
suppressing diagnostic messages: Reversibility
surface, what it is: Introduction
surface, what it is: Surfaces
surfaces, aliases: Requests
surfaces, commutativity: Requests
surfaces, implementation in Recode: Surfaces
surfaces, implied: Requests
surfaces, overview: Surface overview
surfaces, syntax: Requests
system detected problem, error message: Errors

task execution: Task level
task level functions: Task level
TeX files: LaTeX
TeX files: BibTeX
Texinfo files: Texinfo
threshold for error reporting: Errors
time stamps of files: Recoding
trivial surface: Surfaces
tutorial: Tutorial

unavailable conversions: Charset overview
Unicode: UCS-2
unknown charsets: Listings
unreachable charsets: Charset overview
untranslatable input, error message: Errors

valid characters in charset names: Requests
verbose operation: Recoding
Vietnamese charsets: Vietnamese

World Wide Web: HTML


Jump to:   A   B   C   D   E   F   G   H   I   L   M   N   O   P   Q   R   S   T   U   V   W   X  

Next: , Previous: , Up: Top   [Contents][Index]

Option Index

This is an alphabetical list of all command-line options accepted by recode.

Jump to:   -
Index Entry  Section

--colons: Recoding
--copyright: Listings
--diacritics: Mixed
--find-subsets: Listings
--force: Reversibility
--graphics: Recoding
--header: Listings
--help: Listings
--ignore: Recoding
--known=: Listings
--list: Listings
--quiet: Reversibility
--sequence: Sequencing
--silent: Reversibility
--source: Mixed
--strict: Reversibility
--touch: Recoding
--verbose: Recoding
--version: Listings
-C: Listings
-c: Recoding
-d: Mixed
-f: Reversibility
-g: Recoding
-h: Listings
-i: Sequencing
-k: Listings
-l: Listings
-p: Sequencing
-q: Reversibility
-s: Reversibility
-S: Mixed
-T: Listings
-t: Recoding
-v: Recoding
-x: Recoding

Jump to:   -

Next: , Previous: , Up: Top   [Contents][Index]

Library Index

This is an alphabetical index of important functions, data structures, and variables in the Recode library.

Jump to:   A   B   D   E   F   L   M   R   T   V  
Index Entry  Section

abort_level: Task level
ascii_graphics: Request level

byte_order_mark: Task level

declare_step: New surfaces
diacritics_only: Request level
diaeresis_char: Request level

error_so_far: Task level

fail_level: Task level
find_charset: Charset level

LC_MESSAGES, when listing charsets: Listings
list_all_charsets: Charset level
list_concise_charset: Charset level
list_full_charset: Charset level

make_header_flag: Request level

recode_buffer_to_buffer: Request level
recode_buffer_to_file: Request level
recode_delete_outer: Outer level
recode_delete_request: Request level
recode_delete_task: Task level
recode_file_to_buffer: Request level
recode_file_to_file: Request level
recode_filter_close: Task level
recode_filter_close, not available: Request level
recode_filter_open: Task level
recode_filter_open, not available: Request level
recode_format_table: Request level
recode_new_outer: Outer level
recode_new_request: Request level
recode_new_task: Task level
RECODE_OUTER structure: Outer level
recode_perform_task: Task level
RECODE_REQUEST structure: Request level
recode_request structure: Request level
recode_scan_request: Request level
recode_string: Request level
recode_string_to_buffer: Request level
recode_string_to_file: Request level
RECODE_TASK structure: Task level

task_request structure: Task level

verbose_flag: Request level

Jump to:   A   B   D   E   F   L   M   R   T   V  

Previous: , Up: Top   [Contents][Index]

Charset and Surface Index

This is an alphabetical list of all the charsets and surfaces supported by Recode, and their aliases.

Jump to:   0   1   2   3   4   5   6   8   9  
A   B   C   D   E   F   G   H   I   J   K   L   M   N   O   P   Q   R   S   T   U   V   W   X   Y  
Index Entry  Section

037: Tabular
038: Tabular

1004: Tabular
1026: Tabular
1047: Tabular
10646: UCS-4
1129, not available: Vietnamese
1250: Tabular
1251: Tabular
1252: Tabular
1253: Tabular
1254: Tabular
1255: Tabular
1256: Tabular
1257: Tabular
1258, not available: Vietnamese
1345: Tabular
1866: HTML

2070: HTML
21-Permutation: Permutations
256: Tabular
273: Tabular
274: Tabular
275: Tabular
278: Tabular
280: Tabular
281: Tabular
284: Tabular
285: Tabular
290: Tabular
297: Tabular

367: Tabular

420: Tabular
423: Tabular
424: Tabular
4321-Permutation: Permutations
437: Tabular

500: Tabular
500V1: Tabular

64: MIME

819: Tabular
850: Tabular
851: Tabular
852: Tabular
855: Tabular
857: Tabular
860: Tabular
861: Tabular
862: Tabular
863: Tabular
864: Tabular
865: Tabular
868: Tabular
869: Tabular
870: Tabular
871: Tabular
875: Tabular
880: Tabular
891: Tabular

903: Tabular
904: Tabular
905: Tabular
912: Tabular
918: Tabular

AFRFUL-102-BPI_OCIL, and aliases: African
AFRFUL-103-BPI_OCIL, and aliases: African
AFRL1-101-BPI_OCIL: African
AFRLIN-104-BPI_OCIL: African
AFRLIN-105-BPI_OCIL: African
ANSEL, a charset: ISO 5426 and ANSEL
ANSI_X3.110-1983, not recognised by recode: Tabular
ANSI_X3.4-1968, aliases and source: Tabular
ANSI_X3.4-1968, and its aliases: ASCII
ANSI_X3.4-1986: Tabular
arabic: Tabular
arabic7: Tabular
ASCII: Requests
ASCII: Tabular
ASCII, an alias for the ANSI_X3.4-1968 charset: ASCII
ASCII-BS, and its aliases: ASCII-BS
ASMO-708: Tabular
ASMO_449, aliases and source: Tabular
AtariST: AtariST

b64: MIME
baltic, aliases and source: Tabular
bambara: African
Bang-Bang: Bang-Bang
Base64: MIME
BibTeX, a charset: BibTeX
bra: African
BS, an alias for ASCII-BS charset: ASCII-BS
BS_4730, aliases and source: Tabular
BS_viewdata, aliases and source: Tabular
btex: BibTeX

ca: Tabular
CDC-NOS, and its aliases: CDC-NOS
CHAR: Requests
cl: End lines
cn: Tabular
combined-UCS-2: UCS-2
CORK: Others
count-characters: count-characters
count-characters, not as before charset: Charset overview
cp-ar: Tabular
cp-gr: Tabular
cp-hu: Tabular
cp-is: Tabular
CP037: Tabular
CP038: Tabular
CP1004: Tabular
CP1026: Tabular
CP1047: Tabular
CP1129, not available: Vietnamese
CP1250, aliases and source: Tabular
CP1251, aliases and source: Tabular
CP1252, aliases and source: Tabular
CP1253, aliases and source: Tabular
CP1254, aliases and source: Tabular
CP1255, aliases and source: Tabular
CP1256, aliases and source: Tabular
CP1257, aliases and source: Tabular
CP1258, not available: Vietnamese
CP256: Tabular
CP273: Tabular
CP274: Tabular
CP275: Tabular
CP278: Tabular
CP280: Tabular
CP281: Tabular
CP284: Tabular
CP285: Tabular
CP290: Tabular
CP297: Tabular
CP367: Tabular
cp367: ASCII
CP420: Tabular
CP423: Tabular
CP424: Tabular
CP437: Tabular
CP500: Tabular
CP819: Tabular
CP850: Tabular
CP851: Tabular
CP852: Tabular
CP855: Tabular
CP857: Tabular
CP860: Tabular
CP861: Tabular
CP862: Tabular
CP863: Tabular
CP864: Tabular
CP865: Tabular
CP868: Tabular
CP869: Tabular
CP870: Tabular
CP871: Tabular
CP875: Tabular
CP880: Tabular
CP891: Tabular
CP903: Tabular
CP904: Tabular
CP905: Tabular
CP912: Tabular
CP918: Tabular
CR, a surface: End lines
CR-LF, a surface: End lines
csa7-1: Tabular
csa7-2: Tabular
CSA_Z243.4-1985-1, aliases and source: Tabular
CSA_Z243.4-1985-2, aliases and source: Tabular
CSA_Z243.4-1985-gr, aliases and source: Tabular
CSN_369103, aliases and source: Tabular
cuba: Tabular
CWI, aliases and source: Tabular
CWI-2: Tabular
cyrillic: Tabular

d1: Dump
d2: Dump
d4: Dump
data, a special charset: Surfaces
data, not with charsets: Charset overview
de: Tabular
dec: Tabular
DEC-MCS, aliases and source: Tabular
Decimal-1: Dump
Decimal-2: Dump
Decimal-4: Dump
DIN_66003, aliases and source: Tabular
dk: Tabular
dk-us, not recognised by recode: Tabular
dos: IBM-PC
DS2089: Tabular
DS_2089, aliases and source: Tabular
dump-with-names: dump-with-names
dump-with-names, not as before charset: Charset overview

e13b: Tabular
EBCDIC, a charset: EBCDIC
EBCDIC-AT-DE, aliases and source: Tabular
EBCDIC-AT-DE-A, aliases and source: Tabular
EBCDIC-BE: Tabular
EBCDIC-BR: Tabular
EBCDIC-CA-FR, aliases and source: Tabular
ebcdic-cp-ar1: Tabular
ebcdic-cp-ar2: Tabular
ebcdic-cp-be: Tabular
ebcdic-cp-ca: Tabular
ebcdic-cp-ch: Tabular
ebcdic-cp-es: Tabular
ebcdic-cp-fi: Tabular
ebcdic-cp-fr: Tabular
ebcdic-cp-gb: Tabular
ebcdic-cp-gr: Tabular
ebcdic-cp-he: Tabular
ebcdic-cp-is: Tabular
ebcdic-cp-it: Tabular
ebcdic-cp-nl: Tabular
ebcdic-cp-roece: Tabular
ebcdic-cp-se: Tabular
ebcdic-cp-tr: Tabular
ebcdic-cp-us: Tabular
ebcdic-cp-wt: Tabular
ebcdic-cp-yu: Tabular
EBCDIC-Cyrillic: Tabular
EBCDIC-DK-NO, aliases and source: Tabular
EBCDIC-DK-NO-A, aliases and source: Tabular
EBCDIC-ES, aliases and source: Tabular
EBCDIC-ES-A, aliases and source: Tabular
EBCDIC-ES-S, aliases and source: Tabular
EBCDIC-FI-SE, aliases and source: Tabular
EBCDIC-FI-SE-A, aliases and source: Tabular
EBCDIC-FR, aliases and source: Tabular
EBCDIC-Greek: Tabular
EBCDIC-INT1: Tabular
EBCDIC-IS-FRISS, aliases and source: Tabular
EBCDIC-IT, aliases and source: Tabular
EBCDIC-JP-E: Tabular
EBCDIC-JP-kana: Tabular
EBCDIC-PT, aliases and source: Tabular
EBCDIC-UK, aliases and source: Tabular
EBCDIC-US, aliases and source: Tabular
ECMA-113: Tabular
ECMA-113(1986): Tabular
ECMA-114: Tabular
ECMA-118: Tabular
ECMA-cyrillic, aliases and source: Tabular
ELOT_928: Tabular
ES, aliases and source: Tabular
ES2, aliases and source: Tabular
ewondo: African

FI: Tabular
flat, a charset: flat
flat, not as before charset: Charset overview
fr: Tabular
friss: Tabular
fulfude: African

gb: Tabular
GB_1988-80, aliases and source: Tabular
GB_2312-80, not recognised by recode: Tabular
GOST_19768-74: Tabular
GOST_19768-87, aliases and source: Tabular
greek: Tabular
greek-ccitt, aliases and source: Tabular
greek7, aliases and source: Tabular
greek7-old, aliases and source: Tabular
greek8: Tabular

h0: HTML
h1: HTML
h2: HTML
h3: HTML
h4: HTML
hebrew: Tabular
Hexadecimal-1: Dump
Hexadecimal-2: Dump
Hexadecimal-4: Dump
hp-roman8, aliases and source: Tabular
hu: Tabular

IBM-PC charset, and CR-LF surface: Requests
IBM037, aliases and source: Tabular
IBM038, aliases and source: Tabular
IBM1004, aliases and source: Tabular
IBM1026, aliases and source: Tabular
IBM1047, aliases and source: Tabular
IBM256, aliases and source: Tabular
IBM273, aliases and source: Tabular
IBM274, aliases and source: Tabular
IBM275, aliases and source: Tabular
IBM277, aliases and source: Tabular
IBM278, aliases and source: Tabular
IBM280, aliases and source: Tabular
IBM281, aliases and source: Tabular
IBM284, aliases and source: Tabular
IBM285, aliases and source: Tabular
IBM290, aliases and source: Tabular
IBM297, aliases and source: Tabular
IBM367: Tabular
IBM420, aliases and source: Tabular
IBM423, aliases and source: Tabular
IBM424, aliases and source: Tabular
ibm437: IBM-PC
IBM437, aliases and source: Tabular
IBM500, aliases and source: Tabular
IBM819: Tabular
IBM819, and CR-LF surface: IBM-PC
IBM850, aliases and source: Tabular
IBM851, aliases and source: Tabular
IBM852, aliases and source: Tabular
IBM855, aliases and source: Tabular
IBM857, aliases and source: Tabular
IBM860, aliases and source: Tabular
IBM861, aliases and source: Tabular
IBM862, aliases and source: Tabular
IBM863, aliases and source: Tabular
IBM864, aliases and source: Tabular
IBM865, aliases and source: Tabular
IBM868, aliases and source: Tabular
IBM869, aliases and source: Tabular
IBM870, aliases and source: Tabular
IBM871, aliases and source: Tabular
IBM875, aliases and source: Tabular
IBM880, aliases and source: Tabular
IBM891, aliases and source: Tabular
IBM903, aliases and source: Tabular
IBM904, aliases and source: Tabular
IBM905, aliases and source: Tabular
IBM912: Tabular
IBM918, aliases and source: Tabular
Icon-QNX, and aliases: Icon-QNX
iconv: iconv
iconv, not in requests: Charset overview
IEC_P27-1, aliases and source: Tabular
INIS, aliases and source: Tabular
INIS-8, aliases and source: Tabular
INIS-cyrillic, aliases and source: Tabular
INVARIANT, aliases and source: Tabular
irv: Tabular
ISO 5426, a charset: ISO 5426 and ANSEL
ISO-10646-UCS-2, and aliases: UCS-2
ISO-10646-UCS-4, and aliases: UCS-4
ISO-8859-1, aliases and source: Tabular
ISO-8859-10, aliases and source: Tabular
ISO-8859-13, aliases and source: Tabular
ISO-8859-14, aliases and source: Tabular
ISO-8859-15, aliases and source: Tabular
ISO-8859-2, aliases and source: Tabular
ISO-8859-3, aliases and source: Tabular
ISO-8859-4, aliases and source: Tabular
ISO-8859-5, aliases and source: Tabular
ISO-8859-6, aliases and source: Tabular
ISO-8859-7, aliases and source: Tabular
ISO-8859-8, aliases and source: Tabular
ISO-8859-9, aliases and source: Tabular
iso-baltic: Tabular
iso-celtic: Tabular
iso-ir-10: Tabular
iso-ir-100: Tabular
iso-ir-101: Tabular
iso-ir-102: Tabular
iso-ir-109: Tabular
iso-ir-11: Tabular
iso-ir-110: Tabular
iso-ir-111: Tabular
iso-ir-121: Tabular
iso-ir-122: Tabular
iso-ir-123: Tabular
iso-ir-126: Tabular
iso-ir-127: Tabular
iso-ir-13: Tabular
iso-ir-138: Tabular
iso-ir-139: Tabular
iso-ir-14: Tabular
iso-ir-141: Tabular
iso-ir-143: Tabular
iso-ir-144: Tabular
iso-ir-146: Tabular
iso-ir-147: Tabular
iso-ir-148: Tabular
iso-ir-15: Tabular
iso-ir-150: Tabular
iso-ir-151: Tabular
iso-ir-152: Tabular
iso-ir-153: Tabular
iso-ir-154: Tabular
iso-ir-155: Tabular
iso-ir-157: Tabular
iso-ir-158: Tabular
iso-ir-16: Tabular
iso-ir-17: Tabular
iso-ir-170: Tabular
iso-ir-179: Tabular
iso-ir-179a: Tabular
iso-ir-18: Tabular
iso-ir-19: Tabular
iso-ir-199: Tabular
iso-ir-2: Tabular
iso-ir-203: Tabular
iso-ir-21: Tabular
iso-ir-25: Tabular
iso-ir-27: Tabular
iso-ir-37: Tabular
iso-ir-4: Tabular
iso-ir-47: Tabular
iso-ir-49: Tabular
iso-ir-50: Tabular
iso-ir-51: Tabular
iso-ir-54: Tabular
iso-ir-55: Tabular
iso-ir-57: Tabular
iso-ir-6: Tabular
iso-ir-6: ASCII
iso-ir-60: Tabular
iso-ir-61: Tabular
iso-ir-69: Tabular
iso-ir-8-1: Tabular
iso-ir-8-2: Tabular
iso-ir-84: Tabular
iso-ir-85: Tabular
iso-ir-86: Tabular
iso-ir-88: Tabular
iso-ir-89: Tabular
iso-ir-9-1: Tabular
iso-ir-9-2: Tabular
iso-ir-90, not recognised by recode: Tabular
iso-ir-93: Tabular
iso-ir-94: Tabular
iso-ir-95: Tabular
iso-ir-96: Tabular
iso-ir-98: Tabular
ISO646-CA: Tabular
ISO646-CA2: Tabular
ISO646-CN: Tabular
ISO646-CU: Tabular
ISO646-DE: Tabular
ISO646-DK: Tabular
ISO646-ES: Tabular
ISO646-ES2: Tabular
ISO646-FI: Tabular
ISO646-FR: Tabular
ISO646-FR1: Tabular
ISO646-GB: Tabular
ISO646-HU: Tabular
ISO646-IT: Tabular
ISO646-JP: Tabular
ISO646-JP-OCR-B: Tabular
ISO646-KR: Tabular
ISO646-NO: Tabular
ISO646-NO2: Tabular
ISO646-PT: Tabular
ISO646-PT2: Tabular
ISO646-SE: Tabular
ISO646-SE2: Tabular
ISO646-US: Tabular
ISO646-YU: Tabular
ISO8859-1: Tabular
ISO8859-10: Tabular
ISO8859-13: Tabular
ISO8859-14: Tabular
ISO8859-15: Tabular
ISO8859-2: Tabular
ISO8859-3: Tabular
ISO8859-4: Tabular
ISO8859-5: Tabular
ISO8859-6: Tabular
ISO8859-7: Tabular
ISO8859-8: Tabular
ISO8859-9: Tabular
isoir91: Tabular
isoir92: Tabular
ISO_10367-box, aliases and source: Tabular
ISO_10646: UCS-4
ISO_2033-1983, aliases and source: Tabular
ISO_5427(1981): Tabular
ISO_5427, aliases and source: Tabular
ISO_5427-ext, aliases and source: Tabular
ISO_5428(1980): Tabular
ISO_5428, aliases and source: Tabular
ISO_646.basic(1983): Tabular
ISO_646.basic, aliases and source: Tabular
ISO_646.irv(1983): Tabular
ISO_646.irv(1991): Tabular
ISO_646.irv, aliases and source: Tabular
ISO_6937-2-25, aliases and source: Tabular
ISO_6937-2-add, not recognised by recode: Tabular
ISO_8859-1: Tabular
ISO_8859-1(1987): Tabular
ISO_8859-10: Tabular
ISO_8859-10(1993): Tabular
ISO_8859-13: Tabular
ISO_8859-13(1998): Tabular
ISO_8859-14: Tabular
ISO_8859-14(1998): Tabular
ISO_8859-15: Tabular
ISO_8859-15(1998): Tabular
ISO_8859-2: Tabular
ISO_8859-2(1987): Tabular
ISO_8859-3: Tabular
ISO_8859-3(1988): Tabular
ISO_8859-4: Tabular
ISO_8859-4(1988): Tabular
ISO_8859-5: Tabular
ISO_8859-5(1988): Tabular
ISO_8859-6: Tabular
ISO_8859-6(1987): Tabular
ISO_8859-7: Tabular
ISO_8859-7(1987): Tabular
ISO_8859-8: Tabular
ISO_8859-8(1988): Tabular
ISO_8859-9: Tabular
ISO_8859-9(1989): Tabular
ISO_8859-supp, aliases and source: Tabular
ISO_9036: Tabular
IT, aliases and source: Tabular

Java: Java
JIS_C6220-1969: Tabular
JIS_C6220-1969-jp, aliases and source: Tabular
JIS_C6220-1969-ro, aliases and source: Tabular
JIS_C6226-1978, not recognised by recode: Tabular
JIS_C6229-1984-a, aliases and source: Tabular
JIS_C6229-1984-b, aliases and source: Tabular
JIS_C6229-1984-b-add, aliases and source: Tabular
JIS_C6229-1984-hand, aliases and source: Tabular
JIS_C6229-1984-hand-add, aliases and source: Tabular
JIS_C6229-1984-kana, aliases and source: Tabular
JIS_X0201, aliases and source: Tabular
JIS_X0212-1990, not recognised by recode: Tabular
jp: Tabular
jp-ocr-a: Tabular
jp-ocr-b: Tabular
jp-ocr-b-add: Tabular
jp-ocr-hand: Tabular
jp-ocr-hand-add: Tabular
js: Tabular
JUS_I.B1.002, aliases and source: Tabular
JUS_I.B1.003-mac, aliases and source: Tabular
JUS_I.B1.003-serb, aliases and source: Tabular

Kamenicky: Others
katakana: Tabular
KEYBCS2: Others
KOI-7, aliases and source: Tabular
KOI-8, aliases and source: Tabular
KOI-8_CS2: Others
KOI-8_L2: Tabular
KOI8-R, aliases and source: Tabular
KOI8-RU, aliases and source: Tabular
KOI8-U, aliases and source: Tabular
koi8l2: Tabular
KSC5636, aliases and source: Tabular
KS_C_5601-1987, not recognised by recode: Tabular

l1: Tabular
l2: Tabular
l3: Tabular
l4: Tabular
l5: Tabular
L6: Tabular
l7: Tabular
l8: Tabular
l9: Tabular
lap: Tabular
LaTeX, a charset: LaTeX
Latin-1: ISO 8859
latin-greek, aliases and source: Tabular
Latin-greek-1, aliases and source: Tabular
latin-lap: Tabular
latin1: Tabular
latin1-2-5: Tabular
latin2: Tabular
latin3: Tabular
latin4: Tabular
latin5: Tabular
latin6: Tabular
latin7: Tabular
latin8: Tabular
latin9: Tabular
lin: African
lingala: African
ltex: LaTeX

mac: Tabular
mac: Mac OS
mac-is, aliases and source: Tabular
macce: Tabular
macce: Mac OS
macedonian: Tabular
macintosh, a charset, and its aliases: Mac OS
macintosh, aliases and source: Tabular
macintosh_ce, aliases and source: Tabular
macintosh_ce, and its aliases: Mac OS
mnemonic, an alias for RFC1345 charset: Tabular
ms-ansi: Tabular
ms-arab: Tabular
ms-cyrl: Tabular
ms-ee: Tabular
ms-greek: Tabular
ms-hebr: Tabular
ms-turk: Tabular
MSZ_7795.3, aliases and source: Tabular
Mule, a charset: Mule

NATS-DANO, aliases and source: Tabular
NATS-DANO-ADD, aliases and source: Tabular
NATS-SEFI, aliases and source: Tabular
NATS-SEFI-ADD, aliases and source: Tabular
NC_NC00-10(81): Tabular
NC_NC00-10, aliases and source: Tabular
next: Tabular
NeXTSTEP, aliases and source: Tabular
NF_Z_62-010, aliases and source: Tabular
NF_Z_62-010_(1973), aliases and source: Tabular
no: Tabular
no2: Tabular
NS_4551-1, aliases and source: Tabular
NS_4551-2, aliases and source: Tabular

o1: Dump
o2: Dump
o4: Dump
Octal-1: Dump
Octal-2: Dump
Octal-4: Dump
os2latin1: Tabular

pc: IBM-PC
pcl2: Tabular
pclatin2: Tabular
PT, aliases and source: Tabular
PT2, aliases and source: Tabular

QNX, an alias for a charset: Icon-QNX
quote-printable: MIME
Quoted-Printable: MIME

r8: Tabular
ref: Tabular
RFC1345, a charset, and its aliases: Tabular
roman8: Tabular
rune: UCS-2

sami, aliases and source: Tabular
sango: African
se: Tabular
se2: Tabular
SEN_850200_B, aliases and source: Tabular
SEN_850200_C, aliases and source: Tabular
serbian: Tabular
SS636127: Tabular
ST_SEV_358-88: Tabular
swabytes: Permutations

t-bambara: African
t-bra: African
t-ewondo: African
t-fra: African
t-francais: African
t-fulfude: African
t-lin: African
t-lingala: African
t-sango: African
t-wolof: African
T.101-G2, not recognised by recode: Tabular
T.61-7bit, aliases and source: Tabular
T.61-8bit, not recognised by recode: Tabular
T1: Others
TCVN, for Vietnamese: Vietnamese
test15: Test
test16: Test
test7: Test
test8: Test
texi: Texinfo
Texinfo, a charset: Texinfo
Texte: Texte
TF-16: UTF-16
TF-7: UTF-7
TF-8: UTF-8
ti: Texinfo
txte: Texte

u2: UCS-2
u4: UCS-4
u6: UTF-16
u7: UTF-7
u8: UTF-8
UCS: Universal
UCS-2: UCS-2
UCS-4: UCS-4
uk: Tabular
Unicode, an alias for UTF-16: UTF-16
UNICODE-1-1-UTF-7, and aliases: UTF-7
us: Tabular
US-ASCII: Tabular
us-dk, not recognised by recode: Tabular
UTF-1: Universal
UTF-16, and aliases: UTF-16
UTF-7: UTF-7
UTF-8: UTF-8
UTF-8, aliases: UTF-8

VIQR: Vietnamese
VISCII: Vietnamese
VN1, maybe not available: Vietnamese
VN2, maybe not available: Vietnamese
VN3, maybe not available: Vietnamese
VNI: Vietnamese
VPS: Vietnamese

WinBaltRim: Tabular
windows-1250: Tabular
windows-1251: Tabular
windows-1252: Tabular
windows-1253: Tabular
windows-1254: Tabular
windows-1255: Tabular
windows-1256: Tabular
windows-1257: Tabular
wolof: African

X0201: Tabular
x0201-7: Tabular
x1: Dump
x2: Dump
x4: Dump
XML-standalone: HTML

yu: Tabular





Because iconv can vary from system to system, and is itself a complex tool, it can cause recode to behave in unexpected ways. Therefore, by default it is only used when a conversion would not be possible without it. To request that iconv be used, use the --prefer-iconv option; see prefer-iconv. Conversely, you can disable it with the -x: option; see disable-iconv.


I’m not inclined to accept a charset you just invented, and which nobody uses yet: convince your friends and community first!


In previous versions of Recode, a single colon ‘:’ was used instead of the two dots ‘..’ for separating charsets, but this created problems, because colons are allowed in official charset names.


More precisely, pc is an alias for the charset IBM-PC.


Both before and after may be omitted, in which case the double dot separator is mandatory. This is not very useful, as the recoding reduces to a mere copy in that case.


MS-DOS is one of those systems whose default charset has implied surfaces, here CR-LF. Such surfaces are automatically removed or applied whenever the default charset is read or written, exactly as for any other charset. In the example above, on such systems, the hexadecimal surface would then replace the implied surfaces. To add a hexadecimal surface without removing any, one should write the request as ‘/../x’.


The author of Recode much prefers expressing numbers in decimal rather than octal or hexadecimal, as he considers that the current state of technology should no longer force such strange things on users. But Unicode people see things differently, to the point that Recode cannot escape being tainted with some hexadecimal.


There are still some cases of ambiguous output which are rather difficult to detect, and for which the protection is not active.


The minimality of a UTF-8 encoding is guaranteed on output, but currently it is not checked on input.
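
The following sketch (using Python's own decoder, not Recode's code) illustrates what "minimality" means: each character has exactly one shortest UTF-8 form, and longer "overlong" forms of the same code point must be rejected by a strict decoder.

```python
# Illustration of UTF-8 minimality (not Recode's implementation): the
# byte sequence 0xC0 0xAF is an overlong, non-minimal encoding of '/',
# whose minimal encoding is the single byte 0x2F.  A decoder that
# checks minimality must reject the overlong form.
minimal = b"/"             # 0x2F, the shortest encoding of U+002F
overlong = b"\xc0\xaf"     # a two-byte encoding of the same code point

assert minimal.decode("utf-8") == "/"

try:
    overlong.decode("utf-8")   # Python's decoder enforces minimality
    accepted = True
except UnicodeDecodeError:
    accepted = False

print(accepted)   # → False: the non-minimal form is rejected
```

A checker that skips this test on input, as described above, would silently accept the overlong form.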


Another approach would have been to define the level symbols as masks instead, to give masks to the threshold-setting routines, and to retain all errors; yet I have never met such a need in practice, so I fear it would be overkill. On the other hand, it might be interesting to maintain counters of how many times each kind of error occurred.


It is unlikely that Recode will ever support UTF-1.


This is when the goal charset allows for 16 bits. For narrower charsets, the ‘--strict’ (‘-s’) option decides what happens: either the character is dropped, or a reversible mapping is produced on the fly.
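
As an analogy only (using Python's codec error handlers, not Recode's mechanism), the two behaviours a narrower goal charset can exhibit look like this:

```python
# Analogy for the two fates of a character missing from the goal
# charset: it is either dropped outright, or replaced on the fly by a
# reversible fallback representation.
text = "na\u00efve"   # 'naïve'; the ï does not exist in ASCII

# Behaviour 1: the offending character is simply lost.
dropped = text.encode("ascii", errors="ignore")
print(dropped)        # → b'nave'

# Behaviour 2: a reversible escape that can be mapped back.
reversible = text.encode("ascii", errors="backslashreplace")
print(reversible)     # → b'na\xefve'
```

The handler names here are Python's, chosen only to mirror the drop-versus-reversible-mapping choice described above.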


On DOS/Windows, stock shells do not know that apostrophes quote special characters like |, so one needs to use double quotes instead of apostrophes.


This convention replaced an older one, which said that up to 4 immediately preceding pairs of zero bytes, going backward, were to be considered part of the end of line and not interpreted as ::.


There are supposed to be seven words in this case. So, one is missing.


Look at one of the following sentences (the second has to be interpreted with the ‘-c’ option):

"Ai"e!  Voici le proble`me que j'ai"
Ai:e!  Voici le proble`me que j'ai:

There is an ambiguity between aï, the small animal, and the indicative present of avoir (first person singular), when followed by what could be a diaeresis mark. Fortunately, the case is solved by the fact that an apostrophe always precedes the verb and almost never the animal.
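
A hypothetical sketch (the function name and rules are illustrative, not Recode's implementation) of the disambiguation described above: a double quote after a lowercase vowel becomes a diaeresis, except when an apostrophe shortly precedes the vowel, in which case the quote is kept literal, as after the verb form in j'ai".

```python
# Illustrative sketch of the quote-as-diaeresis convention; handles
# lowercase vowels only, and uses the apostrophe rule from the text.
DIAERESIS = {"a": "ä", "e": "ë", "i": "ï", "o": "ö", "u": "ü"}

def decode_diaeresis(text):
    out = []
    for ch in text:
        prev = out[-1] if out else ""
        # Convert '"' after a vowel, unless an apostrophe occurs just
        # before that vowel (then it marks the verb, so keep the quote).
        if ch == '"' and prev in DIAERESIS and "'" not in "".join(out[-3:-1]):
            out[-1] = DIAERESIS[prev]
        else:
            out.append(ch)
    return "".join(out)

print(decode_diaeresis('"Ai"e!'))    # → "Aïe!
print(decode_diaeresis("j'ai\""))    # → j'ai"  (quote kept after the verb)
```

The real Texte conversion handles many more conventions (grave accents, circumflexes, and so on); this only shows the apostrophe heuristic at work.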


I did not pay attention to proper nouns, but this one stood out as fairly obvious.


Usually, quail means quail egg in Japanese, while egg alone usually means chicken egg. Both quail eggs and chicken eggs are popular foods in Japan. The Quail input system was so named because it is smaller than the previous EGG system. As for EGG, it is the translation of TAMAGO. This word comes from the Japanese sentence takusan matasete gomennasai, meaning sorry to have kept you waiting so long. Of course, the publication of EGG was delayed many times… (Story by Takahashi Naoto)


These are mere examples to explain the concept; in reality, Recode only has Base64 and CR-LF.


If strict mapping is requested, another efficient device will be used instead of a permutation.