Member "SAOImageDS9/tcllib/README.developer" (13 Nov 2019, 13773 Bytes) of package /linux/misc/ds9.8.1.tar.gz
RCS: @(#) $Id: README.developer,v 1.6 2009/06/02 22:49:55 andreas_kupries Exp $
Welcome to Tcllib, the Tcl Standard Library.
This README is intended to be a guide to the tools available to a
developer working on Tcllib, i.e. a guide to making their tasks easier
to perform. It is our hope that this will improve the quality of even
non-released revisions of Tcllib, and make the work of the release
manager easier as well.
The intended audience is, first and foremost, developers beginning to
work on Tcllib. To an experienced developer this document will be less
of a guide and more of a reference. Anybody else interested in working
on Tcllib is invited as well.
Directory hierarchy and file basics
The main directories under the Tcllib top directory are

    modules/
and apps/
Each directory FOO under modules/ represents one package, sometimes
more. In the latter case the packages are usually related in some way.
Examples are the base64, math, and struct modules, with loose (base64)
to strong (math) relations between the packages.
Examples associated with a module FOO, if there are any, are placed
into the directory examples/FOO.
Any type of distributable application can be found under apps/,
together with its documentation, if any. Note that the apps/
directory is currently not split into sub-directories.
Regarding the files in Tcllib, the most common types found are:

    .tcl    Tcl code for a package.

    .man    Documentation for a package, in doctools format.

    .test   Test suite for a package, or part of one. Based on
            tcltest.

    .bench  Performance benchmarks for a package, or part of one.
            Based on modules/bench.

    .pcx    Syntax rules for TclDevKit's tclchecker. Using these
            rules allows tclchecker to check the use of commands
            of a Tcllib package X without having to scan the
            implementation of X, i.e. its .tcl files.
Adding a new module
Assuming that FOO is the name of the new module, and T is the toplevel
directory of the Tcllib sources:

(1) Create the directory T/modules/FOO and put all the files of
    the module into it. Note:

    * The file 'pkgIndex.tcl' is required.

    * Implementation files should have the extension '.tcl'.

    * If available, documentation should be in doctools format,
      and the files should have the extension '.man' for SAK to
      recognize them.

    * If available, the testsuite(s) should use 'tcltest' and the
      general format as used by the other modules in Tcllib
      (declaration of minimally needed Tcl, tcltest, supporting
      packages, etc.). The file(s) should have the extension
      '.test' for SAK to recognize them.

      Note that an empty testsuite, or a testsuite which does not
      perform any tests, is less than useful and will not be
      accepted.

    * If available, the benchmark(s) should use 'bench' and the
      general format as used by the other modules in Tcllib. The
      file(s) should have the extension '.bench' for SAK to
      recognize them.

    * Other files can be named and placed as the module sees fit.
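For illustration, a minimal 'pkgIndex.tcl' for a hypothetical package
'foo' version 1.0 (all names here are made up), whose implementation
lives in a single file foo.tcl next to the index, could look like:

    # Tell Tcl how to load package 'foo' 1.0 on demand: source the
    # implementation file relative to the directory of this index.
    package ifneeded foo 1.0 [list source [file join $dir foo.tcl]]

Both the installer and a later 'package require foo' use this index to
locate and source the implementation.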
(2) If the new module has an example application A which is
    polished enough for general use, put this application into the
    file "T/apps/A.tcl", and its documentation into the file
    "T/apps/A.man". While documentation for the application is
    optional, it is preferred.

    For examples which are not full-fledged applications, are only
    skeletons, or are not polished enough for general use, create
    the directory T/examples/FOO/ and put them there.

    A key difference is what happens to them on installation, and
    what the target audience is.

    The examples are for developers using packages in Tcllib,
    whereas the applications are also for users of Tcllib who do
    not have an interest in developing for and with it. As such,
    the applications are installed as regular commands, accessible
    through the PATH, whereas example files are not installed.
(3) To make Tcllib's installer aware of FOO, edit the file

    Add a line 'Module FOO $impaction $docaction $exaction'. The
    various actions describe to the installer how to install the
    implementation files, the documentation, and the examples.

    Add a line 'Application A' for any application A which was
    added to T/apps for FOO.
The following actions are available:

Installation of the implementation:

    _tcl - Copy all .tcl files in T/modules/FOO into the installation.
    _tcr - As _tcl, but also for .tcl files in subdirectories.
    _tci - _tcl + copying of a tclIndex - special to modules 'math', 'control'.
    _msg - _tcl + copying of subdir 'msgs' - special to modules 'dns', 'log'.
    _doc - _tcl + copying of subdir 'mpformats' - special to module 'doctools'.
    _tex - _tcl + copying of .tex files - special to module 'textutil'.

    The _null action, see below, is available in principle too,
    but a module without implementation does not make sense.

Installation of the documentation:

    _null - Module has no documentation, do nothing.
    _man  - Process the .man files in T/modules/FOO and
            install the results (nroff and/or HTML) in the
            proper location, as given to the installer.

Installation of the examples:

    _null - Module has no examples, do nothing.
    _exa  - Copy the directory T/examples/FOO
            (recursively) to the install location for
            examples.
Testing modules
To run the testsuite of a module FOO in Tcllib use the 'test run'
argument of sak.tcl, like so:

    % pwd
    % ./sak.tcl test run FOO
or  % ./sak.tcl test run modules/FOO

To run the testsuites of all modules either invoke 'test run' without a
module name, or use 'make test'. The latter assumes that configure was
run for Tcllib before, i.e.:

    % ./sak.tcl test run
or  % make test
In all of the above cases the result will be a combination of progress
display and testsuite log, showing for each module the tests that
passed or failed, and how many of each, in a summary at the end.

To get a detailed log it is necessary to invoke 'test run' with
additional options.
First example:

    % ./sak.tcl test run -l LOG FOO

This shows the same short log on the terminal, writes a detailed
log to the file LOG.log, and writes excerpts to other files
(LOG.summary, LOG.failures, etc.).
Second example:

    % ./sak.tcl test run -v FOO
    % make test > LOG

This writes the detailed log to stdout, or to the file LOG, instead of
the short log. In all cases, the detailed log contains a list of all
test cases executed, which failed, and how they failed (expected
versus actual results).
The commands

    % make test
and % make test > LOG

are able to generate different output (short vs. long log) because the
Makefile target contains code which detects that stdout has been
redirected to a file and acts accordingly.
Non-developers should report problems in Tcllib's bug tracker.
Information about its location and the relevant category can be found
in the section 'BUGS, IDEAS, FEEDBACK' of the manpage of the module
and/or package.
Module documentation
The main format used for the documentation of packages in Tcllib is
'doctools', the support packages of which are part of Tcllib, see the
module 'doctools'.
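As a sketch, a minimal doctools manpage for a hypothetical package
'foo' (all names and descriptions made up) has the following overall
shape:

    [manpage_begin foo n 1.0]
    [moddesc   {Foo utilities}]
    [titledesc {Frobnication of things}]
    [require foo 1.0]
    [description]
    This is where the actual documentation of the package goes.
    [manpage_end]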
To convert this documentation to HTML, nroff manpages, or some other
format, use the 'doc' argument of sak.tcl, like so:

    % pwd
    % ./sak.tcl doc html FOO
or  % ./sak.tcl doc html modules/FOO

The result of the conversion can be found in the newly-created 'doc'
directory in the current working directory.
The set of formats the documentation can be converted into can be
queried via:

    % ./sak.tcl help doc
To convert the documentation of all modules either invoke the 'doc'
command without a module name, or use 'make html-doc', etc. The latter
assumes that configure was run for Tcllib before, i.e.:

    % ./sak.tcl doc html
or  % make html-doc
Note the special format 'validate'. Using this format does not convert
the documentation to anything (and the sub-directory 'doc' will not be
created); it just checks that the documentation is syntactically
correct. I.e.:

    % ./sak.tcl doc validate modules/FOO
or  % ./sak.tcl doc validate
Validating modules
Running the testsuite of a module, or checking the syntax of its
documentation (see the previous sections), are two forms of
validation.

The 'validate' command of sak.tcl provides a few more. The online
documentation of this command is available via:

    % ./sak.tcl help validate

The validated parts are man pages, testsuites, version information,
and syntax. The latter only if various static syntax checkers are
available on the PATH, like TclDevKit's tclchecker.
Note that testsuite validation does not execute the testsuites; it
only checks whether a package has a testsuite or not.
It is strongly recommended to validate a module before committing any
type of change made to it.

It is recommended to validate all modules before committing any type
of change made to one of them. There are inter-dependencies between
packages in Tcllib, thus changing one package may break others, and
validating only the changed package will not catch such problems.
Writing Tests
While a previous section talked about running the testsuite for a
module and the packages therein, this has no meaning if the module in
question has no testsuites at all.

This section gives a very basic overview of methodologies for writing
tests and testsuites.
First there are "drudgery" tests, written to check absolutely basic
assumptions which should never fail. For example:

    For a command FOO taking two arguments, write three tests
    calling it with zero, one, and three arguments. These are the
    basic checks that the command fails if it has not enough
    arguments, or too many.
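Using tcltest, such drudgery tests for a hypothetical two-argument
command ::foo::bar (the names are made up, and the exact error message
depends on the implementation, hence the permissive glob match) might
be sketched as:

    # foo.test -- argument-count checks for the hypothetical ::foo::bar
    package require tcltest
    namespace import ::tcltest::*

    test foo-1.0 {bar, not enough arguments} -body {
        ::foo::bar
    } -returnCodes error -match glob -result *

    test foo-1.1 {bar, too many arguments} -body {
        ::foo::bar a b c
    } -returnCodes error -match glob -result *

    cleanupTests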
After that come the tests checking things based on our knowledge of
the command, about its properties and assumptions. Some examples,
based on the graph operations added during Google's Summer of Code
2009:

** The BellmanFord command in struct::graph::ops takes a
   _startnode_ as argument, and this node should be a node of the
   graph. That equals one test case checking the behavior when the
   specified node is not a node of the graph.
   This often gives rise to code in the implementation which
   explicitly checks the assumption and throws a nice error,
   instead of letting the algorithm fail later in some weird
   non-deterministic way.
   Such checks cannot always be done. The graph argument, for
   example, is just a command in itself, and while we expect it to
   exhibit a certain interface, i.e. a set of sub-commands aka
   methods, we cannot check that it has them, except by actually
   trying to use them. That is done by the algorithm anyway, so
   an explicit check is just overhead we can do without.
** IIRC one of the distinguishing characteristics of BellmanFord
   and/or Johnson is that they are able to handle negative
   weights, whereas Dijkstra requires positive weights.

   This induces (at least) three test cases: a graph with all
   positive weights, one with all negative weights, and one with a
   mix of positive and negative weights.

   Thinking further: does the algorithm handle the weight '0' as
   well? Another test case, or several, if we mix zero with
   positive and negative weights.
** The two algorithms we are currently thinking about are about
   distances between nodes, and distance can be 'Inf'inity,
   i.e. nodes may not be connected. This means that good test
   cases are:

   (1) Strongly connected graph
   (2) Connected graph
   (3) Disconnected graph

   At the extremes of (1) and (3) we have the fully connected
   graphs and the graphs without edges, only nodes, i.e. the
   completely disconnected graphs.
** IIRC both of the algorithms take weighted arcs, and fill in a
   default if arcs are left unweighted in the input graph.

   This also induces three test cases:

   (1) Graph with all arcs weighted explicitly.
   (2) Graph without weights at all.
   (3) Graph with a mixture of weighted and unweighted arcs.
What was described above via examples is called 'black-box' testing.
Test cases are designed and written based on our knowledge of the
properties of the algorithm and its inputs, without referencing a
particular implementation.

Going further, the complement to 'black-box' testing is 'white-box'
testing. For this we know the implementation of the algorithm, we look
at it, and design our test cases so that they force the code through
all possible paths in the implementation. Wherever a decision is made
we have a test case forcing a specific direction of the decision, for
all possible directions.
In practice I often hope that the black-box tests I have written are
enough to cover all the paths, obviating the need for white-box tests.

So, if you, dear reader, now believe that writing tests for an
algorithm takes at least as much time as coding the algorithm, and
often more time, then you are completely right. It does. Much more
time. See for example http://sqlite.org/testing.html, a writeup
on how the SQLite database engine is tested.
An interesting connection is to documentation. In one direction, the
properties you are checking with black-box testing are properties
which should be documented in the algorithm's man page. And
conversely, if you have documentation of the properties of an
algorithm, then this is a good reference on which to base black-box
tests.
In practice, test cases and documentation often get written together,
cross-influencing each other. And the actual writing of test cases is
a mix of black-box and white-box, possibly influencing the
implementation while the tests are being written. For example, writing
a test for 'startnode not in input graph' serves as a reminder to put
a check for this into the code.