Friday, December 7, 2007

Lightning Brain Podcast: Click here to listen to "Refactoring ExtendScripts"

Today we'll talk about cleaning up scripts, and as a bonus, we'll work on a script that adds a 'hand-written' quality to text, changing something like this:

Before Jitter



You can download this real-life example of a small script being refactored, from experimental form to a cleaner form - click here to download

Listen to or read the podcast transcript for more info...

On to the podcast - click here to listen to it!

So, the script works, everyone is happy - but maybe you are not done yet...

Imagine that you will have to pick that same script up again twelve months from now. How much time will it take you to 'get back up to speed' and rebuild a mental picture of the script and what it is doing? Probably it will take you many hours of browsing, debugging and fiddling around before you regain enough understanding of the script's inner workings to make a 'safe' change.

Now, what if the circumstances have changed, and the script needs to be adjusted to cope with a new environment - it might need to be moved from InDesign to InDesign Server, there might be new requirements...

How about spending some time now - when your head is still filled with knowledge about how it all works - to make sure the script becomes more future-proof?

In this podcast, I'll list a few of the techniques I personally use to 'future-proof' my scripts.

Before diving into the techniques, I need to explain how we approach ExtendScript development here at Rorohiko; we've adopted an approach that delivers very good quality at a reasonable price - we're not cheap, but unlike many other custom developments, our solutions work and work well.

When we're developing custom scripts, we use a 'no cure, no pay' approach, for a number of reasons.

The main reason is that, for this type of development, building a sufficiently accurate quote often costs us more than the development itself.

For an accurate quote, we would first and foremost need an accurate, extensive project brief.

But in our business we're often dealing with creative, fairly non-technical people, and we have invariably found it very hard, even impossible, to zoom in on an accurate enough technical description of the functionality being looked for.

On top of that, what we're asked to do is often at odds with what is really needed. We're often asked to develop a specific solution, rather than being asked to solve a problem.

From experience we've learned that it pays to dig deeper, and try and find out what the underlying problem is - quite often the asked-for 'solution' only cures a symptom, and leaves the underlying problem unfixed.

The most efficient way we found to get the technical information we need from creative people is to use an iterative approach. Instead of nagging people and trying to wring a technical brief out of them, we put the cart before the horse: we create something, anything, as good as we can, based on the still limited understanding we have of the problem at hand.

We present our customer with the attempted solution, and get their feedback - it is much easier for them to explain what is wrong or what is missing from some tangible, real software, rather than trying to come up with a blueprint.

Based on the feedback, we adjust the solution (or throw it out and start over), and we go through a few iterations, until we have the thing sussed.

Eventually, we reach a good, smooth solution, and by that time we also know exactly what the cost of that solution is.

At that point, the 'no cure, no pay' system kicks in: our customer can choose to purchase and continue to use the software, or alternatively, should our solution not live up to its promise, the software is simply destroyed, and there is no cost to the customer.

This approach works really well, but the successive iterations cause the software to go through a few swings and roundabouts, and along the way, grow a whole collection of warts if we're not careful.

We'll typically spend some time refactoring the scripts to make sure they're future-proof and self-explanatory - making a small investment of time now in return for a substantial time-saving later.

Here are some of the things we do:

1) Don't rely on app.activeDocument

While in the heat of experimentation, iteration and script development, it's easy to assume that the functionality being created will be applied to the current document, and hence to refer to app.activeDocument.

However, there are sizable benefits to removing the reliance on the active document. Our scripts will typically contain a number of functions, and whenever a function uses app.activeDocument, we rework that function to not use 'app.activeDocument', but instead take a 'document' parameter.

The idea is that you get hold of the document 'under consideration' in one single spot in the script, and that from then on you pass a document parameter to any function that is supposed to work on that document.

The two biggest advantages are:

First, your script becomes a lot easier to convert to an InDesign Server environment (where there is no such thing as app.activeDocument), and second, all of your functions have now suddenly become much more re-usable: they can now also be used when the document to be modified is not the active document.

For example, the document might be a temporary, invisible document you're opening in the background - and by NOT using app.activeDocument, you can pass such a document to your functions as a parameter.
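To make the idea concrete, here is a minimal sketch; the function and the frame-counting job are invented for illustration, but the shape is the point: the function receives its document as a parameter and never mentions app.activeDocument itself.

```javascript
// Hypothetical helper: it receives the document to work on as a parameter,
// so it works equally well on the active document, a background document,
// or a document opened on InDesign Server.
function CountTextFrames(document) {
    return document.textFrames.length;
}

// The ONE spot in the script that decides which document is 'under
// consideration' - everything else just receives it as a parameter:
//
//   var doc = app.activeDocument;    // desktop InDesign
//   var doc =;   // server / background document
//   CountTextFrames(doc);
```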

2) Test, test, test your preconditions

When using a function, it pays to add tests for preconditions - make sure all parameters being passed are what they are supposed to be, and display an error message if they are not. Whenever possible, we leave all this testing code in the script - so if something goes wrong at the customer's end we get good, specific information about where things went off the rails.

Typically, we'll have a global constant - something like kDebugging - which can be set to true or false to indicate debugging mode.

We'll also add a messaging function similar to alert() which can display a dialog box with a message. The difference with alert() is that the dialog box is conditional on kDebugging being set to true. If kDebugging is set to false, the message is ignored.

And then we'll test all the function parameters being passed into a function. Is the document non-null? Is it instanceof Document? Is the percentage a number between 0 and 100? If any of these tests fail, a debug message is emitted, and the function 'bails out'. This guarantees that any unexpected condition can be caught early on.

This works well by wrapping most of the function body inside a do{}while(false) construct, which misuses the do-while loop to build a 'ladder-like' function structure.

Inside the do{}while(false) there is a whole series of if tests which verify that all is well, and display a debug message followed by a 'break' statement if not. The break causes the function to 'fall off' the ladder for any failing precondition. The debug message being displayed is specific enough to pinpoint the spot where things went wrong - it includes the name of the function where the problem occurs, and a short description of what is wrong.

This construct is quite similar to using try-catch, but it is 'cheaper' in a number of respects; it causes less overhead than using try-catch, and does not cause the InDesign Debug version to emit assert statements during script execution.
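A minimal sketch of the kDebugging constant, the conditional alert and the precondition ladder together (the function name and the fillPercentage property are invented for the example; in InDesign you would call alert() where console.log is used here):

```javascript
var kDebugging = true;

// Conditional alert: only speaks up while kDebugging is true.
// (console.log stands in for ExtendScript's alert() in this sketch)
function DebugAlert(message) {
    if (kDebugging) {
        console.log(message);
    }
}

// Hypothetical function showing the precondition 'ladder'
function SetFillPercentage(frame, percentage) {
    var success = false;
    do {
        if (frame == null) {
            DebugAlert("SetFillPercentage: frame is null");
            break; // fall off the ladder
        }
        if (typeof percentage != "number" || percentage < 0 || percentage > 100) {
            DebugAlert("SetFillPercentage: bad percentage " + percentage);
            break; // fall off the ladder
        }
        // All preconditions hold - the real work goes here
        frame.fillPercentage = percentage;
        success = true;
    } while (false);
    return success;
}
```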

3) Do not spend time optimizing unless it is really necessary.

Now, you'd think that all that debug code from the previous point must cause a lot of overhead.

Well, it turns out that is not true most of the time - a typical script will spend 95% of its time in 5% of the code, and all that debug code has very little impact on the script's execution time.

In practice, we'll leave all our debug code in the script - all we might do is to set kDebugging (or whatever the constant is called) to false - but even that we often don't do: it's better to be informed of unexpected circumstances, than to have a script silently and mysteriously fail.

Only when there are speed issues might we consider removing some debugging code - but only if we can clearly see that this code is part of the bottleneck.

The current ExtendScript Toolkit contains a nice profiling tool that allows you to see where a script is spending most of its time. Our recommendation is to not bother with optimizing unless there is a time issue, and when optimizing, use proper profiling to solve the bottleneck - but nothing else. Any debug code that you can leave alone should be left alone; it's part of your safety net.

It is very common for our scripts to have 50% or more debugging/testing code in them.

4) Avoid global variables

While experimenting, it is very common and easy to introduce some global variables to keep track of things.

However, global variables can be a recipe for disaster - especially when you need to revisit an older script and make some modifications to it.

Global variables represent a form of communication between different areas of your script - functions can communicate with one another by stuffing data into global variables, and getting it back out again.

Problem is: that type of interaction is very easy to overlook, and it causes all kinds of unexpected side effects - for example, you add an extra call to a particular function somewhere, the function changes the value of some global variable, and then other functions that rely on that same variable go off the rails.

Because functions don't clearly 'advertise' what global data they consult or modify, it becomes very hard to keep track of interactions. That makes for fun debug sessions, chasing weird bugs after making a 'tiny change' to a year-old script.

Like everyone else, during the initial phase of a project, we often start out stuffing data into globals - but unless there is good reason to, we'll rework the script and move the global variables into function parameters. If there is a lot of data, we'll introduce one or more data structures which are then passed around as a parameter.

An example: we might be parsing a text string, and keep track of where we're at in a global variable gTextPos, and store the string in a global gParseText.

During cleanup, that will be reworked (or 'refactored' as it is often called) - we'll get rid of the globals, and instead we'll put the current 'parse state' into a JavaScript object with at least two attributes: parseText and textPos.

Then we pass that object to the relevant routines using a parameter - say 'parseState'.

This way it becomes immediately clear to the human reader of the script which routines access that data (they need the parameter) and which ones don't access that data (they don't need the parameter) - it's a self-enforcing cleanup. From this moment on, each function that needs access to that data does 'advertise' the fact via its parameter list.
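The parse-state refactoring described above can be sketched as follows; the tiny parser is invented for illustration, but it shows how gParseText and gTextPos become attributes of an object that travels through the parameter lists:

```javascript
// Before: var gParseText = "..."; var gTextPos = 0;   (globals)
// After: the parse state travels as a parameter.
function MakeParseState(text) {
    return { parseText: text, textPos: 0 };
}

// Returns the next character, or null when the text is exhausted.
// The parameter list 'advertises' that this function touches the parse state.
function NextChar(parseState) {
    if (parseState.textPos >= parseState.parseText.length) {
        return null;
    }
    return parseState.parseText.charAt(parseState.textPos++);
}

// Usage: every routine that needs the data asks for it openly
//   var state = MakeParseState("ab");
//   NextChar(state);   // "a"
//   NextChar(state);   // "b"
//   NextChar(state);   // null
```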

Imagine every JavaScript function as a gob of code floating in space. Then imagine what outside factors influence the function's operation and how, in return, the function influences its environment. There are the parameters coming in at the top, the return value coming out at the bottom. Most of the time these two relations are pretty easy to see.

Then there are any global variables that are modified or consulted by the function - using globals leaves a lot of room for unseen interaction between the function and its environment. Things like app.something and $.something are also globals - they are provided by InDesign, but they are still globals.

The more 'isolated' you can make your functions, the easier they will be to re-use in a different script.

Functions that interact with global data are like a beating heart - very difficult to transplant because there is a lot of stuff to disconnect and reconnect.

Functions that take data via their parameters, and return data via their return value and/or via some of their parameters are much easier to transplant: a few easy connections to their environment; they easily snap in and out.

5) Each function should do one thing well

We always try to create functions that do one thing well; during the 'frantic' phase of a project we often end up with functions that do lots of stuff. These multi-headed monsters need to be divvied up into smaller functions - each doing just one thing. Functions that are initially called something like 'ImportFileAndColorFramesAndDeleteOverrun' are split up into multiple smaller functions.

This increases the chances of making things re-usable - any 'good' function eventually ends up in our growing function library and will be reused, which cuts down our development time on future projects. Multi-headed monsters are never reusable - so cutting them up has distinct advantages.
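A sketch of such a split, using the monster name from the text; the toy implementations work on a plain object purely to make the shape visible, and all names are invented for illustration:

```javascript
// Before (hypothetical): one function doing three jobs:
//   function ImportFileAndColorFramesAndDeleteOverrun(doc, text) { ... }

// After: three single-purpose functions...
function ImportFile(doc, text) {
    doc.contents = text;
}

function ColorFrames(doc, color) {
    doc.frameColor = color;
}

function DeleteOverrun(doc, maxLength) {
    doc.contents = doc.contents.slice(0, maxLength);
}

// ...composed by a small driver. Each piece can now be reused on its own,
// and each is a candidate for the function library.
function ProcessDocument(doc, text) {
    ImportFile(doc, text);
    ColorFrames(doc, "Black");
    DeleteOverrun(doc, 10);
}
```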

6) Name constants and move them to the header of the file for easier customization

During the trial and error phase, you'll typically add all kinds of literal constants to the code - it is worthwhile to isolate these constants from the code and move them to a separate section near the top.

This makes the script easier to adjust, and it also makes it more robust.

Now, if a certain string constant is used twice in the script, there seems to be little advantage in creating a symbolic constant for the string and then using the constant instead of the literal string. Many people think this is a pedantic use of constants - on cursory inspection, the two approaches do not look all that different.

However, the advantage is that with a symbolic constant, typing errors can be immediately caught by the computer, whereas with literal strings the computer would not know that these two strings are supposed to be equal.

So, if you'd type two literal strings "TextFrame" and "TextFrome", the computer would accept that - but if you typed two symbolic constants kTextFrame and kTextFrome, the second one would be undefined and cause an error.
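A small runnable illustration of the difference, using the "TextFrame"/"TextFrome" typo from above:

```javascript
// Constants gathered near the top of the script:
var kTextFrame = "TextFrame";

// With literal strings, a typo is just a different string - nothing complains:
var literalMatch = ("TextFrome" == "TextFrame"); // quietly false

// With symbolic constants, the same typo fails loudly: kTextFrome was never
// defined, so reading it throws a ReferenceError the computer can report.
var typoCaught = false;
try {
    var constantMatch = (kTextFrome == kTextFrame);
} catch (e) {
    typoCaught = true; // ReferenceError: kTextFrome is undefined
}
```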

You can download a real-life example of a small script being refactored, from experimental form to a cleaner form - click here to download

Sunday, November 11, 2007

Virtual Group

In addition to being a software developer and trainer, I am also a member of a New Zealand-based team of business consultants, called the 'Virtual Group'. To better explain what the Virtual Group can do for an organisation, we'll be conducting a number of interviews with our team members.

The first interviewee is Bruce Holland - Bruce is an expert in revitalising large mature organisations.

I interviewed him today, and started a new Virtual Group blog/podcast which can be read and listened to by clicking here.

Monday, October 29, 2007

Lightning Brain Podcast: Click here to listen to "InDesign User Interfaces"

Welcome to another episode of the Lightning Brain Podcast. We'll talk a little bit about InDesign and user-interface code.

When it comes to extending InDesign, there are many options available: you could create a C++-based plug-in, you can create an ExtendScript (JavaScript) solution, you can build an AppleScript or VBScript-based solution, you can build a Flash-based UI and then use it in InDesign, you can glue some other development environment 'into' InDesign, and you can also make a hybrid of the aforementioned solutions...

Which approach to choose depends on what your need is, what portion of the project is user-interface functionality as opposed to faceless functionality, what development environments you're familiar with, what your potential users are willing to accept, what budgets (time, money, resources, people, testers,...) are available for the project, what the politics involved are,... As in all things automation, there is no single 'best' solution - it all depends.

In this podcast I want to describe an approach we've very successfully used for a few real-size, real-life projects, and what the advantages and disadvantages are.

I first want to include a little disclosure: Rorohiko resells the Active Page Items Developer product as a commercial solution, and Active Page Items is very much part of the approach described here - so you might think this podcast is a veiled advertisement for Active Page Items. Well, it is - but you have to keep in mind that Active Page Items has been created and has grown out of our own need for such a tool - Active Page Items was first; commercializing it came later.

One of the things we've learned is that C++ development for InDesign can be quite expensive: simple principles and algorithms often take a surprising amount of code to express. I'd describe C++ development around InDesign as 'fluffy'. High-level concepts and patterns result in lots of classes and source code files and fairly large amounts of C++ code, much of which is often quite repetitive in a number of respects. Especially the development of user interface elements takes a lot of doing.

At present, the only way to get a really 'native' InDesign UI look is to use C++ - e.g. if you want to create floating palettes, with all their end-user flexibility (tearing off, parking to the side,...) you need to use C++.

From a UI perspective, C++ might be 'perfect', but often there are other approaches that could be classified as 'good enough'. They might not look as nice, but they might do the job.

For example, using ExtendScript with InDesign CS3 one can develop quite complex user interfaces which often are 'good enough'. ExtendScript development is easily an order of magnitude cheaper than C++ development - so if the parameters of the project at hand don't require an absolute perfect-looking interface with floating palettes, ExtendScript can be the way to go.

Often there are also user-interfaces that need something a little bit more complex than what can be accomplished using ExtendScript, yet don't need a full blown native InDesign user interface. That's where the 'hybrid' approach that we've been using comes in.

The approach we've chosen basically boils down to: REALbasic, Active Page Items, ExtendScript.

We create our more complex user interfaces in REALbasic. However, this is not an absolute requirement - on the whole, we could have used Java instead.

The main reasons for choosing REALbasic over Java are 1) that it allows us to create cross-platform code that looks slightly more 'native' on Mac as well as on Windows, 2) easy access to global floating windows (both on Mac and Windows), and 3) purely personal preference: I find I personally can build and implement user interfaces faster with REALbasic than with Java.

Easy access to global floating windows is one feature of REALbasic that comes in really handy, and I don't know whether Java offers an easy alternative.

Global floating windows are windows that remain 'on top', even if the application that owns them is not the foreground application. This is fairly important for the illusion we want to maintain.

Active Page Items is a fairly large C++ plug-in, which we extend as we need with new functionality.

One of its many functions is coordinating InDesign with external applications. Through Active Page Items, we are able to create an illusion that makes the external 'satellite' apps seem to be part of InDesign.

One of the tricks is to 'lock' InDesign in a modal mode while one of our REALbasic satellite apps is running. That creates the illusion that a dialog owned by the satellite app seems to belong to InDesign.

However, the modal mode is not real - it's a simulated modal mode. While this simulated modal mode is active, InDesign is actually still very much 'alive' and able to execute ExtendScript code - so it is possible to create a 'live' session between the satellite app and InDesign while the user interacts with this 'simulated modal dialog', all while 'locking out' the user from any undesirable interactions with InDesign.

The illusion is not perfect: on the Mac's Dock and in the Windows taskbar it is fairly apparent that some secondary app is running, but that is a cosmetic issue, and it does not really seem to annoy our end-users too much.

- Active Page Items is also used to manage menu items and context menu items. This is mostly because our solutions needed to support CS2 as well as CS3 - InDesign CS3's ExtendScript has all you need to create menu items and context menus, so if you have the luxury of an InDesign-CS3-only setup, you can stick with standard ExtendScript in that respect.

- For communication between the various disparate components, we use temporary files. This is a very low-tech and crude approach, but it works 'well enough'. In a future version of Active Page Items we might add support for a more 'high-tech' information exchange mechanism, but for now temp files do us just fine.

If you want to try things out for yourself, I've created a very small sample of such a hybrid solution; the source code to it comes as part of our Active Page Items Developer Toolkit. If you download the latest demo version of the Toolkit from our web site, you'll find my example code tucked away amongst the other examples.

Some of the scripts can also be viewed at the end of the blog entry.

The sample, which deals with overset text, has no real practical applications as such, but you should be able to see how it can be made into a practical solution for particular problems.

The sample performs the following function: it looks out for overset text. As soon as a text frame gets overset, some ExtendScript code jumps into action, and fires up an external application, which then shows the contents of the text frame in a scrolling text field. The idea is that the user edits the text down to a shorter version to stop the overset. Of course, this is not a practical approach at all, but it does allow us to demonstrate the various techniques involved.

In the sample, the active bit of ExtendScript code is currently 'attached' to a 'dummy' page item that sits on the pasteboard. The page item is not meant to be printed or have any sensibly printable content; all it does is hold some script code. We call such a page item a 'controller'. The controller 'watches' one or more page items, and waits for the events to occur.

In a 'real' solution based on Active Page Items, we'd instead 'package' that script code into a so-called 'Scripted Plug-in', in a .spln file.

In this particular case the controller is set to watch all page items, and the 'interesting events' it might watch out for are

- an event called 'subjectModified-recomposed-overset' which occurs when any page item ends up being overset after something happened to it (user typed something, frame resized,...)

- an event called 'idle' which occurs at regular intervals. In this particular solution, we rarely look out for 'idle' events, so as not to unnecessarily tax the computer's performance. We only do so while InDesign is in 'simulated modal mode', when the satellite application is running.

So, when any page item becomes overset, the subjectModified-recomposed-overset event is captured by the controller.

The controller's ExtendScript then launches the external satellite application using a special Active Page Items method attached to the application object - app.launchWith().

app.launchWith() has a number of functions. The most common use is as an extension to the File.execute() method.

File.execute() is similar to double-clicking an icon in Explorer or in the Finder, and will pick the default application to open a particular document.

app.launchWith() allows you to designate a particular application to open a particular document with - it is more akin to drag-dropping a document file icon onto an application's icon.

On top of that, app.launchWith() has a special feature - it allows us to lock InDesign into simulated modal mode for as long as the launched application continues to run.

That makes for a crude, yet effective way to synchronize a satellite app with InDesign: you launch the satellite application using app.launchWith(), and when the user clicks 'OK' in the dialog presented by the satellite application, the application simply exits. Active Page Items is monitoring the satellite app, and as soon as it sees it exit, it will release the simulated modal lock.

So, the controller's ExtendScript first writes the contents of the overset frame into a temporary text file.

It then uses app.launchWith() to tell the satellite app to open this temp text file and pick up the data being communicated.

The satellite app then runs until the user clicks OK in the dialog, after which it writes the new data to the same temporary text file. When the application exits, Active Page Items will release the simulated modal lock automatically.

While the satellite app is running and InDesign is in simulated modal mode, the controller is catching idle events (roughly once per second). During these events, we could perform more communication backwards and forwards with the satellite application (e.g. for live previews or so), but in this case, all we do is check the simulated modal lock: as long as that is not lifted, we know the app is still running and we do nothing.

When we notice that the simulated modal lock has disappeared upon receiving one of the idle events, we know the user has clicked OK, the app has written the new data to the temp file and has quit. We can now read the returned data and stop looking for idle events - things return to normal, with the controller only watching out for overset events.

This sample should give you a little bit of insight into how we approached some real-life projects with very good results.

The advantages we had were:
- fast development of a good-looking UI that was beyond what can be accomplished with ExtendScript.
- cross-platform (Mac & Win) support: the same code works on both platforms with only minute amounts of conditional code.

The disadvantages:
- don't pay attention to the man behind the curtain. Global floating windows and simulated modal mode allow you to get close to, but not identical with, the real thing (an InDesign-generated dialog or palette). The satellite app is visible - we worked around that by giving it a good-looking icon.

On the whole, the disadvantages were acceptable for the particular projects, and as a result we were able to offer very high efficiency in realizing these projects.

Thanks for your attention!


// Example of using an external program for dialogs.
// This document needs a file called "" (on Mac)
// or "ExampleInDesignSatellite.exe" (on Windows) in the same folder
// as the document.
// This event handler handles subjectModified-recomposed-overset and
// idle events.
// In normal circumstances, the event filter is set to just
// subjectModified-recomposed-overset - i.e. the handler only
// activates when there is a text frame that has just recomposed,
// and shows overset
// When that happens, the handler below will launch an external
// program to edit the text frame contents, and also change the event
// filter to "idle" - causing repeated calls to this handler while
// the user is editing the text in the external program. The
// external program is launched using launchWith and a mode equal
// to 4 - meaning: InDesign is modal locked for the user until the
// external program terminates.
// So, what happens is that we repeatedly receive and handle
// idle events, until app.callExtension(0x90b6C,10003) returns
// false (meaning: not modal locked), which only happens when
// the external program has terminated.
// As soon as the external program terminates, we restore the
// normal event filter, and read the output of the external
// program to stuff into the text frame
// tempFile is used to communicate data to and from the
// external program
var tempFile = File(Folder.temp + "/tempText.txt");

do
{
    // Check if we're in the "idle" phase - waiting for the external
    // program to finish
    if (theItem.eventCode == "idle")
    {
        // We check whether InDesign is still modal locked. If so, then
        // the external program has not finished yet - bail out of the event
        // handler. In a second or so, on the next idle event, we'll give
        // it another go
        var indesignModalLocked = app.callExtension(0x90b6C,10003);
        if (indesignModalLocked)
        {
            break;
        }

        // The modal lock is gone - so the external program is finished.
        // We restore the event filter to what it was before it all started
        theItem.eventFilter = "subjectModified-recomposed-overset";

        // Did the external program communicate some data back to us?
        // If so, then it is in the temporary file
        if (! tempFile.exists)
        {
            break;
        }

        // Read the edited text and stuff it back into the story being edited
        // We've stored a reference to the story in the data store associated to
        // theItem
        var editedStory = theItem.getDataStore("editedStory");"r");
        editedStory.contents =;

        // And we're done for now!
        break;
    }

    // Ok, we're handling a subjectModified-recomposed-overset event here.
    var theDocument = GetParentDocument(theItem);

    // We need the document's path to find the satellite app. If the document
    // has not been saved yet, there is no path - so bail out
    if (! theDocument.saved)
    {
        break;
    }

    // Is this a Mac or a PC? The Mac uses .app files, the PC uses .exe
    var isMac = $.os.charAt(0) == "M";
    var theSatelliteApp;
    if (isMac)
    {
        // (Mac app bundle name assumed to mirror the Windows .exe name)
        theSatelliteApp = File(theDocument.fullName.parent + "/");
    }
    else
    {
        theSatelliteApp = File(theDocument.fullName.parent + "/ExampleInDesignSatellite.exe");
    }

    // If we cannot find the satellite app, bail out
    if (! theSatelliteApp.exists)
    {
        break;
    }

    // Write the overset story to a temporary text file
    var theStory = theItem.eventSource.parentStory;"w");
    tempFile.write(theStory.contents);

    // If we cannot find the temp file we just created, bail out
    if (! tempFile.exists)
    {
        break;
    }

    // Open the temp file with the satellite app, and use flag "4"
    // which means: lock InDesign into a user-modal mode until
    // the satellite app terminates
    // (argument order of launchWith assumed)
    app.launchWith(theSatelliteApp, tempFile, 4);

    // Change the event filter to process idle events - so we
    // can regularly check whether the satellite app has
    // terminated or not
    theItem.eventFilter = "idle";

    // We need to remember the story so we can put the edited
    // text somewhere later on
    // (setter assumed to mirror the getDataStore call used above)
    theItem.setDataStore("editedStory", theStory);
}
while (false);

// End of event handler. Utility functions below

function GetParentDocument(pageItem)
{
    var document = null;
    do
    {
        var err;
        try
        {
            document = pageItem.parent;
        }
        catch (err)
        {
            document = null;
        }

        if (document == null)
        {
            break;
        }

        if (document instanceof Document)
        {
            break;
        }

        // Top of the hierarchy reached without finding a Document
        if (document == pageItem)
        {
            document = null;
            break;
        }

        pageItem = document;
    }
    while (true);

    return document;
}

Wednesday, August 22, 2007

Going overseas

The next podcast will be a bit delayed - I am travelling to Europe to run some developer trainings, and I have not had the time to get my next podcast done. Stay tuned!

Tuesday, August 7, 2007

Lightning Brain Podcast: Click here to listen to "Writing code like a story"

Book mentioned in podcast:

Code Craft
The practice of writing excellent code
by Pete Goodliffe
ISBN 1-59327-119-0

Rough transcript of podcast:

Writing code like a story

Hi, my name is Kris Coppieters from Rorohiko. This is the third Lightning Brain podcast - writing code like a story.

I've been programming for over 30 years now - and my programming style has changed over time, to suit new languages, to suit new ideas, to suit programming styles.

If I look back at code I wrote, say, ten years ago, the changes are not all that great when compared to code I wrote earlier; I seem to have settled into a number of habits that work well, and have not felt much need to change.

Many of the things I do also seem to coincide with the advice given in many books about coding style - most of it is common sense.

I want to make it clear that some of my preferences are just that - preferences; sometimes, as a coder, you get 'locked in' to a certain approach. There are often many other approaches that are at least as valid, but there is no clear benefit to switch between them, so you stay 'locked' into a particular way of doing things.

When I was younger, I would let myself be enticed into endless debates - things like where to put the braces {} and whether to use tabs or spaces, that kind of stuff.

Through experience, I've learned that those things are highly irrelevant. I have found that consistency matters more than what you are consistent in.

So, if you like the braces one way or another, you won't get any argument from me. But you will get an argument if your braces are sprinkled one way here, another way there. To me that's a bit the same as having to read a book where the font size jumps up and down at random: it makes reading the stuff harder for no reason.

I often have to work on other people's code - and currently, my approach is to follow whatever consistent coding convention is used in each individual source file. I have no trouble with a project that consists of source files created by individuals who all had different ideas on how to structure their code - as long as there is a consistency to it within each source code file.

I know of developers that cannot stand working on source code that is not exactly structured 'their' way. I consider such inability a major disadvantage: these people tend to waste large amounts of time on 'restructuring' some perfectly consistent piece of code.

I strive to write code with as few comments as possible. Yes - you heard that right. I think comments are a last-resort type of thing.

I try to write the code first and foremost in such a way that it is as clear and as easy to understand as possible; I think this might be more of an art than a technique.

I only use comments if I cannot make everything crystal clear by means of clean code. You need as few comments as possible, but no fewer. Gratuitous comments that simply re-state what the code does are a no-no - things like:

// Increment i

// This is the constructor

make me cringe.

Comments are dangerous - they have a tendency to get outdated as the code around them evolves, and instead of being helpful, they often become a liability. So it is important to try hard to make the code so clear it does not need comments.

Instead, I restructure and rewrite my code until it is as close to self-explanatory as possible.
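As a tiny invented illustration of the idea (these names are mine, not from any script discussed here): rather than commenting a cryptic expression, give it a name that says what the comment would have said.

```javascript
// Before: the comment does the explaining.
// var d = b * b - 4 * a * c; // discriminant of the quadratic

// After: the name does the explaining, and cannot silently go stale
// the way a comment can.
function discriminantOfQuadratic(a, b, c) {
    return b * b - 4 * a * c;
}

var discriminant = discriminantOfQuadratic(1, 0, -4);
```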

My priorities are always to write readable and understandable code first, and efficient code second. When someone else (often me) needs to pick up the code in a few months or years, the most important goal is to make it easy on the developer to grasp the meaning of the code. Not doing so is inviting bugs to be introduced during maintenance by a developer who only half-grasps or half-remembers what is going on.

Often, if the code cannot be made self-explanatory, that is a symptom that something is fundamentally wrong with the approach used, and you need to take a step back and rethink things.

Writing good code is almost the same as writing a good book or a good story: it has good content, is easy to understand, is consistent in as many respects as possible, is nicely laid out. Code also needs those traits.

I also pay a lot of attention to code layout and neatness. A small example: I will often order all of the functions, procedures, declarations and the like in alphabetical order.

I know that many IDEs make it easy to 'jump' around to functions in a source code file - but sometimes I find myself separated from my finely honed development environment, working in an unfamiliar debugger, or staring at my source code in printed form, or using WordPad to view the code. Having everything alphabetical makes it easy to guess which direction to scroll to find something in the source code.

Just recently I also figured out a way to describe how I like to pick names for things.

One basic rule of thumb is: short scope allows short names. For example, if I need an index variable, and it will be needed for just two or three lines, I am perfectly happy to call it 'idx' or even 'i'. If the scope gets larger, and spans a few tens of lines, I will make the index name more descriptive - for example 'spreadIdx' or 'pageIdx'. If there are similar variables around, I find what sets them apart and name them accordingly, making sure they are not easily confused - except in rare circumstances, I'll never use variable names like 'idx' and 'idx2'.

Names that have large scope (e.g. span multiple source files) are often longer, and often tell a little story (but again - not too long; it's easy to go overboard).

When creating names, I also try to be consistent in how a name gets formed, and I will often build names that state the more general part first and the more specific part last.

Similar names have a similar 'lead-in', so in case they get ordered alphabetically (e.g. in a debugger or some IDEs), similar things end up 'close to' other similar things. For example, I'll use

const kFileName_Template = "bla.indt";
const kFileName_MainDocument = "yaya.ind";

instead of

const kTemplate_FileName = "bla.indt";
const kMainDocument_FileName = "yaya.ind";

Till next time!

Wednesday, August 1, 2007

Lightning Brain Podcast: Click here to listen to "Fun with Drop Shadows in InDesign"

Example Files:

Click here to download sample script and sample document

Example result:

Rough Transcript of Podcast:

Hi, my name is Kris Coppieters from Rorohiko, and this is my second 'Lightning Brain' podcast - about having fun with drop shadows in InDesign.

Initially, I'll try to create a number of podcasts in fairly quick succession, so as to build up some content on the Rorohiko blog, but in the long run I aim to release a new podcast about once every two weeks.

This episode I wanted to talk a bit about some fun experiments I did with drop shadows in InDesign. My brain seems to be a magnet for ideas - which would be fine, if only all ideas that pop up would be useful. Sadly enough, some of my ideas are rather silly, and this podcast is based on one of the silly ones.

The podcast will also delve a little bit into some mathematical aspects of my experiments. However, even if you are not mathematically inclined you should still be able to have fun with the example document and example script - so even if the mathematics gives you the blue shivers, don't worry! Just download the sample files and have fun!

My idea was this: assume there are a number of page items scattered over a page or spread. Why not coordinate all these drop shadows so they cause some visual effect to occur? Standard drop shadows are rather dull. They simulate what would happen if there was a light source at infinite distance throwing light on a page item floating at a particular height above the page.

What I was wondering about was: what would happen if the simulated light source was not at infinite distance, but instead somewhere else - for example, located on the viewer's head, close to the paper, like a head-torch, or if the light source was simulating one of those swiveling desk lights.

The other thing I wondered about was: what if the page items that are throwing drop shadows would be pretending to float at smoothly varying distances from the page - for example: page items near the middle of the page would float higher, and page items near the edges would float lower above the page.

So, I set out to do some experiments using InDesign and ExtendScript, and it turns out the results are interesting - interesting enough to share as a podcast.

I started with the following assumptions and limitations:

All distances are expressed in points

Pages and spreads have a coordinate system: an X-axis pointing to the right, a Y-axis pointing down. The origin of the coordinate system can be pretty much anywhere, but often it is somewhere in the upper left hand corner of the page or spread. In the script, I rather arbitrarily use X-Y coordinates relative to an origin that sits in the middle of each page.

I introduce a third axis - a Z-axis which is assumed to point out of the page towards the viewer. A page item that floats above the page will have a positive Z-coordinate. A page item that does not float but sits on the page has a Z-coordinate equal to zero. Because pages are considered to be opaque, negative Z-coordinates don't make much sense in this model, as they would simulate page items behind the page.

I assume there is a simulated light source hovering at some point above the page. The point has some X and Y coordinates (which express the point on the page above which the light source is located) and a Z coordinate (which expresses how high above the page the light source is hovering).

As far as the page items go: in addition to their X- and Y-coordinate data, selected page items are assumed to also have a Z-coordinate which shows how high above the page these page items are supposed to be floating (so they can cast a shadow).

I simplified things by representing each page item by a single point - the page item's center, which is easily calculated as the mean values of the bounding box X- and Y-coordinates.
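In ExtendScript this is a one-liner per axis; a sketch with invented names, not the literal code from the example script (note that InDesign's geometricBounds arrays are ordered [y1, x1, y2, x2]):

```javascript
// Center of a page item, given its geometricBounds array
// [y1, x1, y2, x2]: the mean of the two X values and the mean
// of the two Y values.
function centerOfBounds(bounds) {
    return {
        x: (bounds[1] + bounds[3]) / 2,
        y: (bounds[0] + bounds[2]) / 2
    };
}
```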

To get an interesting Z-coordinate I used a mathematical formula that takes these X and Y coordinates and returns a Z coordinate.

One of the formulas in the example scripts gives a Z coordinate that has its highest values for page items in the middle of the page, and gets lower for page items near the borders of the page (for the mathematically inclined: an elliptic paraboloid).
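A formula of that shape might look as follows - a sketch with invented names, not the literal code from ShadowDance.js. The height peaks at the page center and is clamped so page items never end up below the page:

```javascript
// Elliptic paraboloid: z = maxHeight * (1 - (x/a)^2 - (y/b)^2),
// where a and b are the half-width and half-height of the page,
// and (x, y) is measured from the page center.
function floatHeight(x, y, halfWidth, halfHeight, maxHeight) {
    var z = maxHeight * (1 - (x / halfWidth) * (x / halfWidth)
                           - (y / halfHeight) * (y / halfHeight));
    return z > 0 ? z : 0; // clamp: no negative heights in this model
}
```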

To calculate the parameters of the drop shadow of each individual page item, I went back to some of my high-school geometry formulas. Using the (X,Y,Z) coordinates of the light source and the (X,Y,Z) coordinates of each page item's center, I calculated the amount of X- and Y shift to apply to the drop shadows.
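The calculation boils down to intersecting a ray with the page plane; here is a sketch with invented names: extend the line from the light source through the page item's center until it reaches z = 0, and the difference between that landing point and the item's own (X, Y) position is the shadow shift.

```javascript
// light and item are {x, y, z} points; the light must be higher
// above the page than the item (light.z > item.z) for the shadow
// to land on the page.
function shadowOffset(light, item) {
    // Scale factor that takes the z coordinate from light.z down
    // to 0 along the light -> item direction (similar triangles).
    var t = light.z / (light.z - item.z);
    var shadowX = light.x + t * (item.x - light.x);
    var shadowY = light.y + t * (item.y - light.y);
    return { xShift: shadowX - item.x, yShift: shadowY - item.y };
}
```

Feeding these X and Y shifts into each page item's drop shadow settings produces the coordinated effect described above.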

To work the magic, I also assumed that the script would only affect page items that are located on a page layer - I called the layer "magic carpet" (all lowercase, one space). Only page items on this layer are affected by my example script.

Finally, to get some visuals, I decided to first create a page filled with a regular pattern of square page items. The script will of course work with any kind and any amount of page items on the "magic carpet" layer, but I expected the best visual effect with a regular spaced grid of page items.

I created a new document, created a layer "magic carpet". Then I used two step-and-repeat operations to sprinkle a host of small, colored squares all over the page (about 6 squares horizontally and 11 squares vertically, with a good gap between them). You first create a single square, and use step-and-repeat with a horizontal displacement of..., then select the row of squares, and do a second step-and-repeat with a vertical displacement of ...)

Then I ran my little script - and suddenly I was presented with quite a nice 3-D effect.

The script and example documents I used are available for download from this blog.


On InDesign CS or CS2, you install the file 'ShadowDance.js' into the Presets - Scripts subfolder of your InDesign application folder. The script uses .js as its file name extension instead of .jsx - that way the same script works on CS as well as CS2 and CS3.

On InDesign CS3, you install the file 'ShadowDance.js' into one of the script folders - I installed it in Scripts - Scripts Panel.

Launch InDesign and create a new document.

Create a new layer called 'magic carpet'.

On this layer, create a single square, about 60x60 points in size (about 20 mm x 20 mm); position it in the top left hand corner.

Fill the square with a color.

Use Edit - Step and Repeat... to create five or six duplicates of the square, with a horizontal offset slightly larger than the side of the square (e.g. 72 pt or 25 mm), and a vertical offset of zero.

Select the whole row of squares, and create 9 or 10 duplicates of the row of squares, this time with a horizontal offset of zero and a vertical offset slightly larger than the side of the square.

You should now have a page that is covered with a regular grid of square page items, all of which sit on the 'magic carpet' layer.

Bring up your scripts palette (Window - Scripting - Scripts in CS, Window - Automation - Scripts in CS2 and CS3. In CS3, look under 'Application' if you installed the script into the Scripts - Scripts Panel folder).

Double-click the script 'ShadowDance.js'. You should get a result that looks similar to what you find in the example document 'ShadowDanceCS.indd'.

Till next time!

Monday, July 30, 2007

Lightning Brain Podcast: Click here to listen to "Getting Started with the Adobe InDesign SDK"


Recommended book list:

| Added remark, 15-Jul-2008: I have since written my own book
| about getting started:

Effective C++
by Scott Meyers
ISBN 0-201-92488-9

More Effective C++
by Scott Meyers
ISBN 0-201-63371-X

STL Tutorial and Reference Guide
by Musser, Derge & Saini
ISBN 0-201-37923-6

Effective STL
by Scott Meyers
ISBN 0-201-74962-9

Design Patterns – Elements of Reusable Object-Oriented Software
by Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides
ISBN 0-201-63361-2

The C++ Programming Language (Special 3rd Edition)
by Bjarne Stroustrup
ISBN 0-201-70073-5



Adobe InDesign Developer Centre:

Rough podcast transcript:

InDesign can be approached in two ways - either you can use a high-level approach, using one of the supported scripting languages, or you can use a low-level approach, using C++.

The high-level approach is often sufficient for automating various repetitive tasks; we'll discuss this high-level approach in more detail in some future podcasts. At Rorohiko, for our own software development, we always try to achieve as much as possible using scripting, because development-wise scripting is so much more cost-efficient when compared to using the low-level approach and the SDK.

The low-level approach becomes necessary when it comes to really tight integration, for example, getting involved in the various drawing processes, or providing new user-interface elements. Then there is no alternative but to use the InDesign SDK and C++ to achieve the desired results.

The main aim of this particular podcast is to offer some insights into the low-level approach using C++.

If you are a software developer, you might already have experience with a number of environments and languages - QuarkXPress XTension development, C, C++, Java, JavaScript,... - and expect to ease into InDesign development with roughly the same amount of effort you needed when you started with those other environments and languages.

The InDesign SDK is probably very different to anything you've ever handled before.

The actual InDesign SDK is not very difficult. Granted, it is very extensive, but the concepts behind it are not much different from those in other, similar SDKs.

The main difference is that the InDesign SDK is built on top of a fair number of other methodologies and concepts, all fairly recent developments. If you try to step into the InDesign SDK without being well versed in most of these underlying foundations, nothing much will make sense.

A would-be InDesign SDK developer should keep in mind that there are no shortcuts: you must first cover the basics before trying to work with the InDesign SDK, or risk repeatedly losing a lot of head-scratching time trying to understand things that are actually quite simple.

It would be like trying to build and launch a satellite without first studying mathematics and physics.

So, these are the four things you need to do before you can get started:

1) You MUST have a very good grasp of C++ and various techniques. This is the first and foremost requirement. My recommendation is to at least read Scott Meyers' books (Effective C++, More Effective C++) a few times, especially if you come from a C background. I am sure even experienced C++ programmers who have not yet read these books will learn some important new things.

2) You must have a grasp of the C++ Standard Library, specifically the STL (Standard Template Library), and the Boost C++ Libraries (Boost provides free peer-reviewed portable C++ source libraries).

You don't need to become an expert on these, but you need to have a good idea about what a vector is, what an iterator is, and how you use them. You also need to get a good idea of the 'mindset' behind STL and boost, and how C++ templates are used in a clever way to generate a lot of magic.

3) You must be able to read UML diagrams. UML diagrams use a number of similar, but slightly different symbols to express relationships between things. Unless you know what the symbols mean, you'll miss out on a lot of information that is packed into the UML diagrams inside the InDesign SDK documentation.

4) You must have had some exposure to the idea of 'Software Patterns' - the book 'Design Patterns' by the gang of four is highly recommended. You don't need to read this book from cover to cover, but you should at least read about the most important patterns and ideas.

Once these four requirements are fulfilled (good grasp of C++, grasp of STL and boost, understanding UML, grasp of common software patterns) you are ready to tackle InDesign SDK programming.

Started a blog

Everyone is blogging - so I decided to give that a go too. This is the Rorohiko blog. Expect various posts, rants, musings,... about development, printing and prepress, various programming languages, mathematics, photography,... The next few posts will just be some testing material - trying out getting a podcast going, that kind of stuff.