Sunday, March 11, 2012

Fingal Library Service add eBooks to their range of offerings

I was excited to see the new eBook lending service from Fingal Libraries, so I thought I would write a short review. I am a big reader and am normally in the middle of reading at least 2 or 3 books in a variety of different genres at any one time. Although I buy quite a few books, my reading habit would be unaffordable if I had to buy all of the books I read - therefore I am a big user of the Fingal Library service (in fact I am the long-serving mayor of Blanchardstown Library on Foursquare).

The library is a really great service and amazingly it is free. People who complain about the level of taxes we pay should at least ensure that they take advantage of the services that these taxes fund. As well as providing a large selection of books available for loan, the libraries also have a large selection of DVDs, music CDs and audio-books to choose from. They even provide a facility to borrow paintings for a period of up to 3 months (although I have never availed of this service myself).

Some people dismiss the library as an institution from the past which is doomed to fade away in the internet age. I disagree very strongly with this view: the concept of a public lending library is an important one that we ought not to forget in the debate about the balance between the rights of creators and consumers of content. It is important that we maintain public lending libraries if we want to ensure that everyone in society has an affordable opportunity to access the educational materials they need. In fact the free internet access in the library can be a vital service to people who might otherwise have no access.

Although I am a big fan of technology, many people are surprised to find that I have not yet embraced eBooks. When I received an Android tablet as a present I thought I would use it for reading eBooks. Although I initially read a few eBooks on the device, I found myself in the situation where I could easily gain free access to paper books, but had to pay substantial costs for the slightly less convenient experience of reading the electronic version of those books.


Therefore I was very excited to learn a few weeks ago that Fingal Libraries had added eBooks to their range of offerings. To access the eBook lending service you go to their eBook portal and sign in with your library card number and PIN (the same PIN you use to log into the regular library web page). If you don't have a PIN you can get one by calling in to your local library.

When you start using this service you might initially be a bit confused, because this is not a web site run by the Fingal Library service as such; instead you are accessing a global web site called OverDrive, with the cost of your access being covered by Fingal County Council. I am not sure exactly how the financial details work, but luckily I don't need to. I suspect that the books on offer to me are a selection bought by the Fingal Library service rather than a global selection, because there seems to be an unusually large number of books with themes of Irish interest. The service is quite easy to use and any problems I did encounter were easily solved with assistance provided by the operators of the Fingal Library page on Facebook.

The eBooks in the library are available in two different formats. One format is compatible with Adobe Digital Editions which is usable on your computer. The other format is compatible with the OverDrive reader application which is available for a wide variety of mobile platforms. I used the OverDrive Android version on a miScroll tablet and found the reading experience to be very pleasant. I also installed the application on my phone - while the software worked perfectly, I can't imagine I would read an entire novel on the small screen of my phone.

There is a version of the OverDrive application which works on Amazon Kindle devices, but apparently the OverDrive site has placed restrictions which stop books borrowed from libraries outside the USA from being read on the Kindle. I can't see the logic behind this unfair regional restriction, but I guess I would get more worked up about it if I actually owned a Kindle.

When you borrow a book from the library you are free to read it on any compatible device you own, but you can't simply transfer the file from one device to another (which I initially tried to do). Instead you must download the book directly from the web site by selecting the "Get Books" option from within the application. I am not sure why this is necessary, but I assume it is something to do with ensuring you are not trying to bypass the usage limitations.

Each user is allowed to borrow up to three books at a time. When you borrow a book you can choose between a borrowing period of 14 days or 21 days. If you choose the OverDrive format you can return the book as soon as you are finished reading it, but with the Adobe format you can't. This means that you would probably be best to choose the shorter loan period, because the book will count against your loan limit even after you have finished reading it. If one of your borrowed books has expired you can always download it again (unless someone else has borrowed it in the meantime). If you have not deleted the book from your library your stored bookmarks will be maintained.

What did I borrow?

Decider
by Dick Francis


I picked this first because I expected the content to be undemanding. I was pleasantly surprised by how enjoyable I found the experience of reading the book, both in terms of the physical experience of reading an eBook and the fact that Dick Francis is clearly a very skilled writer. I did not know what the ending was going to be until I reached it, but I knew from experience that it was going to be a happy ending.
Breakfast with Anglo
by Simon Kelly


The second book I chose was an account of the recent Irish property bubble as told by a property developer who was personally involved in the centre of the action. It was different from the previous book in that I knew in advance what the ending was going to be and that it was not going to be happy. Nevertheless I found it educational to see how things looked from the point of view of someone who was personally involved.


The last chapter was devoted to the lessons he learned from the experience. I know that hindsight is always 20/20 vision, but anyone considering getting involved in property speculation would be well advised to read this chapter.
If You Lived Here, I'd Know Your Name
News from Small-Town Alaska
by Heather Lende


The third book I chose was different again. It was an account of what it is like to live in a small isolated town in Alaska. The author worked as an obituary writer for the local paper and the stories in the book seemed to be mainly derived from the information she learned while researching these obituaries. Irish papers tend to only publish obituaries for prominent people, but it seems that in Alaska they publish obituaries for all people who die. This meant that the stories described an eclectic selection of people who lived very different lives. 

It was ironic to be reading a modern format eBook about people leading an austere life with little access to modern technology, but overall I found it enjoyable.

In short, I really like this new eBook service. I don't think I will give up reading paper books yet, but I will definitely supplement my reading materials with regular borrowing from the eBook library.

Friday, March 9, 2012

The trouble with GRUB

Most Linux systems use the Grand Unified Boot Loader (GRUB) to control their boot process. In general this is a wonderfully simple but powerful system that lets you easily define a boot-time menu from which the user can choose which of the installed Linux versions to boot (of course most people set up a default option which gets booted after a short delay if no other choice is made).

This tool is very useful for people who want to try out Linux, but want the security of being able to easily switch back to Windows if they regret the decision. Although GRUB doesn't officially understand how to control the Windows boot process, there is a well understood trick to allow you to include Windows versions in your GRUB boot menu. Since Windows can only be booted from the first partition on your hard disk, you simply need to get GRUB to make the partition containing the version of Windows you want to boot look like the first partition, and then GRUB hands control to the Windows boot loader to do the rest. Most Linux installers will automatically configure GRUB for you at install time with a choice of all of the different operating systems found on your partitions, so you don't really need to learn much about how it works under the covers.
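As an illustration, a GRUB2 menu entry that chainloads Windows looks something like this (a minimal sketch; the partition number is hypothetical and would match wherever your Windows boot loader actually lives):

menuentry "Windows" {
    set root=(hd0,1)    # the partition holding the Windows boot loader (hypothetical)
    chainloader +1      # hand control over to the Windows boot loader
}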

Every time you update your version of the Linux kernel it is necessary to tweak the configuration files to include an entry for the new kernel. Luckily, the most recent version of GRUB (which is confusingly known as GRUB2 although the current version is v1.99) has a great configuration system which includes scripts to automatically rebuild updated configuration files each time you install a new Linux kernel.

With such glowing praise for GRUB, you might wonder why I entitled this post "The trouble with GRUB". However, there is one minor problem with the way that GRUB works which is very annoying. The GRUB2 automatic scripts for building configuration files assume that the configuration files should be contained in the /boot/grub/ directory on the partition that the current version of Linux was booted from, but at boot time GRUB might look for a menu definition file in a completely different partition. For example I currently normally run Ubuntu Oneiric (v11.10) on my laptop, which is installed on partition /dev/sda5, but at boot time GRUB looks for its boot menu on /dev/sda3 (where RHEL 6 is installed) - this means that I need to remember to manually copy the GRUB menu definitions from /boot/grub/grub.cfg on /dev/sda5 to /boot/grub/grub.cfg on /dev/sda3 or else I will continue to boot an old version of the kernel.
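For the record, the manual workaround is simple enough (a minimal sketch, assuming /dev/sda3 carries its own /boot directory and is not currently mounted):

sudo update-grub                                      # regenerate /boot/grub/grub.cfg on the running system
sudo mount /dev/sda3 /mnt                             # mount the partition GRUB actually boots from
sudo cp /boot/grub/grub.cfg /mnt/boot/grub/grub.cfg   # copy the fresh menu into place
sudo umount /mnt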

This must be a nuisance to many Linux users and not just me. Does anyone know of an easy way to tell GRUB which partition should be used for storing its configuration files? Even better, does anyone know how the automated configuration scripts could be updated to figure this out for themselves?

Sunday, March 4, 2012

Fantastic Dublin Science Hackday event indicates that Ireland has a bright future

Crochet model of a red blood cell
The first ever Dublin Science Hackday was held this weekend in Dublin City University. The event was part of the Dublin City of Science 2012 celebration and involved teams of amateurs completing challenging projects over a grueling 36 hours. I say amateurs as a compliment because although many of the participants are professionals, they competed in this event purely out of their love of science and technology.

I initially planned to take part in the event myself, but as it got closer the reality of spending 36 hours of my weekend working on a hack began to scare me and I chickened out. Instead I attended the kick-off talks on the Saturday and then gave a few words of encouragement to the participants before I went back to my normal weekend activities. Luckily there were lots of status updates posted to the #dubscihack hashtag on Twitter so I could follow along with the excitement from home. I then returned to DCU on Sunday afternoon to see what had been accomplished. I found the participants much more subdued (not surprising after 36 hours without sleep), but the projects were very impressive.

You can see all of the details of the completed projects here, or even watch the project presentations on Ustream, but the prize winners were:
  • Best use of Government Data went to the YPath project which developed an application for children to track their physical activity.
  • Most Interesting Use of Data went to the Financial Market Sonification project, which produced an audio stream representing a summary of the activity in the market so that traders can have ambient awareness.
  • The Hardware Award went to the Aurora Lamp which used LEDs to project information about the level of Aurora Borealis activity.
  • The Design Award went to Open Stats Wiki, which developed a cool mobile application to allow fans to use their smartphones to record statistics of a match they are attending live as the action unfolds. This hack should have a bright future because there are many people who combine an obsession with sport and an obsession with technology.
  • The People's Choice Award went to the Aurora Orrery project, which built a visualization of where on the globe the Aurora activity can be seen.
  • Last but not least, the Best in Show Award went to the Elements Trail project, which used Layar to build an augmented reality treasure hunt game based upon the periodic table.
It is also worth giving honorable mention to:
  • Tríona (@triploidtree), who completed the CIYbio project (crochet it yourself biology), using crochet to build models of things to do with biology (to help teaching). I am not sure if it was useful, but the models were certainly cute.
  • μsic developed a cool application that mixed music listening with social networking.
  • The intermeter project displayed the level of network activity at the Science Hackday on a simple amp meter.
I think that Ireland has a very bright future when we have bright hackers like this. Well done to everyone who contributed to the event.

Friday, March 2, 2012

Hello! Hello! Can you hear me?

I believe that telephone based applications offer a huge benefit in terms of ease of use as compared with web based applications. However, many people don't even try to create applications with a telephone interface because they mistakenly believe that it is very hard to do. Now that we are on the eve of Dublin Science Hackday I decided it would be a good idea to tell people how easy it is to develop applications with a phone based interface by describing a simple application I developed myself.
First, let me explain some of the background to why I developed the application. I used to work on the development of a computer telephony system. We were strong believers in the theory that it was important for the developers to get a good understanding of the end users' perspective of the system, so we encouraged all of the development team to use early builds of the system as much as possible.
Some of the Headphones I Use.
To be totally honest the experience was painful in the early days. Each time I made a call I knew there was a significant chance that the call would not be successful. Not only was there a chance that there was a bug in the latest daily build of the client which I had installed on my machine or in the server code which was also updated regularly, there was also a very significant chance that there would be some problem with the volume settings on my headset. Most headsets have hardware volume controls and/or mute options on the headset and these controls might not be set properly to interact with the volume settings on my laptop's operating system - and because I carried my headsets around in a bag with other hardware they frequently suffered physical damage.
Because of all of these potential problems I often spent the first few minutes of a telephone meeting shouting "Hello! Hello! Can you hear me?". If I was speaking to another team member I could expect them to be understanding of this wasted time and/or poor audio quality while I tested several headsets to find which was working best. However, when I was making an important call to someone I wanted to impress, I needed some way to be totally confident that all aspects of my telephony setup were working correctly.
Anyone who uses Skype is probably familiar with the "echo123" virtual user. This is a virtual Skype account that anyone can call and be answered by a pleasant sounding lady who will listen to what you say and then repeat it back to you as it sounds to her. I decided to hack together something similar that could be used with any telephony system. After a bit of searching on the internet I found the Voxeo developer site which offers excellent free resources to anyone wanting to develop voice based applications. Voxeo make their money from providing commercial grade voice response systems for mission critical applications, but in order to convince people how easy it is to develop a user friendly voice interface to their systems they give developers free access to their powerful web based development environment and they will even host your application on their test servers so that you can see it in action.
Voxeo support a number of programming languages including the industry standard VoiceXML. Developing a VoiceXML server is very complex, but the good news is that since Voxeo have done that you don't have to. Developing a VoiceXML application is very easy (there are excellent tutorials on the Voxeo site to get you started). I was able to develop my application in under 30 lines of easy to write/understand XML. You can get the full source code here.
The way VoiceXML works is that you specify prompts for the system to play and then you listen for the user to say something (or type a DTMF tone on their keypad). You specify in XML what should be done with the response. You can see I have only one prompt statement, and I use the text to speech function to generate the prompt (it is also possible to record the prompts for a more natural sounding interface).

The only complex line in my code is the one that reads
record name="R_1" beep="true" dtmfterm="true" maxtime="10s" finalsilence="1s" silence="3s
Translated into English this tag means:
  • Record what you hear in a file named R_1.wav
  • If you hear a DTMF tone, stop recording
  • Listen for a maximum of 10 seconds
  • If you hear nothing give up after 3 seconds
  • If you hear something then terminate when the speaker leaves a gap of 1 second or more
The rest of the application is just instructions to play back the recording to the user (or play an error message if we didn't hear anything). It then loops back to the start so that it gives me time to adjust my headset and see if it sounds any clearer.
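To give a flavour of the overall shape, here is a minimal sketch of an echo application along these lines (my own illustration rather than the exact source - the prompt wording and form name are invented):

<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1">
  <form id="echo">
    <record name="R_1" beep="true" dtmfterm="true" maxtime="10s" finalsilence="1s" silence="3s">
      <prompt>Say something after the beep and I will play it back to you.</prompt>
      <noinput>
        <!-- nothing was heard within the silence limit -->
        <prompt>Sorry, I did not hear anything.</prompt>
        <goto next="#echo"/>
      </noinput>
      <filled>
        <!-- play the recording back, then loop for another try -->
        <prompt>This is what I heard.</prompt>
        <audio expr="R_1"/>
        <goto next="#echo"/>
      </filled>
    </record>
  </form>
</vxml>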
Obviously real world applications can get more complex, and if you try to recognize what the user is saying the system can get things hilariously wrong when the caller is not a native speaker. But the general idea is not too hard to master. In any case my application only needs to distinguish between hearing something, in which case the "filled" tag applies, and hearing nothing, in which case the "noinput" tag applies.
If you want to try out the application you can call +1(617)963-0648 to get the version built from this source code, or if you prefer the sound of my voice you can call +1(617)500-5332 to hear a slightly modified version where the prompts use a recording of my voice.

Wednesday, February 29, 2012

The business case for Open Data

I attended a great seminar yesterday organised by the Irish Software Association on the business possibilities for open data. The room was packed and we got some very good presentations followed by an interesting panel discussion. As a result of this discussion I got a pointer to an excellent EU funded research report which found that there was almost always a net benefit to a government from either eliminating charges for access to public sector data or reducing the price to a cost recovery level. Similarly, a recent report from Deloitte suggests that open data will be one of the main mechanisms that will enable a recovery from our recent economic woes.

It is very timely that this seminar was held in the week leading up to the Dublin Science Hackday, because I am sure many of the hacks delivered at that event will use some of this open public data.

Here are two of the three presentations delivered (I don't have a soft copy of the slides delivered by Maurice Lynch of Nathean Technologies).

The first presentation was by Dominic Byrne and it describes some of the great initiatives being undertaken by Fingal County Council to promote open government data. I must admit I was initially skeptical when I saw the relatively limited number of feeds cataloged on the Fingal Open Data web site when it launched, but time has proven that it was the correct policy to launch with what was available and then concentrate on improving it. This is a wonderful local initiative; the only concern I have is that we must not let local initiatives like this blind us to the fact that the real power of open data is its global reach. For example, when I looked for a mobile phone application for DublinBikes, I found that there were no applications written exclusively for DublinBikes but lots of applications written for city bike rental schemes which could be configured to work with bike rental schemes in many different cities including Dublin. From looking at the web sites associated with these applications I noticed that many of them were initially developed for a specific city that the developer was familiar with, but the developers soon realized that they could easily broaden the appeal of their app by making it configurable to work with similar bike rental systems elsewhere.


The second presentation was by Jonathan Raper, whose company Placr has made a great business out of exploiting open data in the UK. Both during his presentation and during the subsequent panel discussion, he shared some of the lessons he learned from the struggle to promote open access to public data in the UK. One interesting story he told was about what happened when the London transport authority realized that many applications were being developed that relied upon a feed that they had accidentally made public. Their IT department initially wanted to shut down the feed because the popularity of the applications meant that too much load was being generated on their infrastructure. However, the mayor was aghast at this suggestion because he saw that there would be a huge political backlash if these popular applications suddenly stopped working. Someone pointed out that this was really an online equivalent of the ancient legal principle of "right of way".

Monday, February 27, 2012

Less than a week to go until Dublin Science Hackday

There are only a few days left until the start of the Dublin Science Hackday which is happening in Dublin City University this weekend (3rd/4th of March). This is one of the first in a series of exciting events planned for the Dublin City of Science 2012 celebration. I have had very good experiences of being involved in Hackday events inside IBM and so I expect that involving an even wider public audience should produce even more innovation and excitement.

Initially I was planning to organise a team of IBM employees to compete in this event, but the reaction to this suggestion was not very positive. On the one hand, employees who had spent a busy week from Monday to Friday working hard on their technology projects did not relish the prospect of spending a 36 hour stretch of their weekend working on another demanding project. On the other hand, some of the organisers felt that it might not set the right tone to have a team of experienced professionals competing against teams mainly comprised of enthusiastic students with little practical experience.

Instead I think I will be arranging for some IBM employees to attend the event as advisers either giving training talks on technology topics that would be useful to hackers and/or informally offering practical help to the project teams as they complete their projects.

In any case I am looking forward to an exciting weekend of technology. I have seen the great projects completed at previous Hackdays held in cities such as London, San Francisco and Cape Town and I know that the Dublin hackers will be just as good (if not even a little better).

Saturday, February 25, 2012

Are you going to tell me or do I have to ask?

IBM is trying to encourage employees to use social media for many forms of communication which were previously done by email. As you can imagine this is getting a mixed reaction from different employees. For example Luis Suarez is leading the way in trying to eliminate use of email almost completely, but many other employees rarely use public social media sites or even the IBM internal implementation of Lotus Connections.
Recently we had a training session where the speaker was explaining how to use the various tools and enthusiastically extolling the benefits of using them. Some of the more reluctant participants raised a few issues that they had encountered and we had a lively discussion on whether these new tools actually help solve the problem of information overload or whether they actually make the problem worse, because now people are expected to read a selection of blogs and keep up with activity in various Lotus Connections communities as well as reading their email.
I did some thinking about this issue afterwards and I came to the conclusion that social media tools do not inherently solve the problem of information overload, but they do transfer control of the information flow from the sender to the receiver. This is a very significant change, but in order to take advantage of it the receiver needs to put some thought into what information they actually want to receive, and then put a little bit of work into implementing a process for receiving that information which works for them.
Prior to the invention of the internet, people used to communicate via physical letters and newspapers. With letters the sender decided whom they would send the letter to, and with newspapers the reader decided what newspapers they wanted to read. There was a cost associated with sending a letter and also a cost to purchase a newspaper, so this acted as a natural limiting factor on the amount of information that anyone received. Once the internet was invented, email replaced letters and web sites replaced newspapers.
Since the cost of sending an email is virtually zero, people have a tendency to send copies of their emails to more people than just those who really need or want to receive the information, and this leads to information overload. Automated spam filters can deal with the really obvious abusers of the email service, but it is still not feasible for most people to read all of the emails they receive each day. If you really want to be in control of your information inflow you need to establish some procedure for deciding what emails you will read (personally I like the GTD system, but I am sure there are many alternatives).
Nobody should limit their information inflows to just the information that other people decide to send them. Therefore you ought to regularly spend some time seeking out new information. Each person will find a different system that works best for them e.g. reading the daily newspaper, reading a series of favourite blogs, logging into Facebook etc. Since everyone has different preferences there is no one system that will work for everyone, but you ought to make a conscious decision about how to go about seeking "news" because the sources of information you rely upon will have an important effect upon how successful you will be in life.
Needless to say another important factor in reducing information overload is that people need to feel empowered and confident enough to tell people "no I didn't read your email". If you do not feel personally empowered to take control of your own information inflow, no tool can help you.

Saturday, February 18, 2012

Security for stand-alone Java programs that use the Notes API


Since its launch in the early 1990s, Lotus Notes has had a wonderful security architecture, built on public/private key cryptography, that has only recently been widely adopted by other platforms. Unfortunately this means that many people are not really familiar with how it works.

I recently blogged about how to write stand-alone Java programs which can manipulate Notes/Domino databases. A few people commented upon the fact that when they run their stand-alone programs on a machine with a Notes client installed they are normally prompted to enter a password each time the program launches. Since they don't get a similar password prompt when they run their program on a machine with a Domino server installed, they seem to think that the Domino server code gives them some form of privilege to bypass Notes security. This is not true, but to explain why I will first need to explain some important facts about the Notes/Domino security model.
  1. Notes implements what security experts call "two factor authentication". This means that you need to prove your identity by two different mechanisms. Firstly you need to have an ID file containing your private key and secondly you need to know the password used to secure the ID file.
  2. Domino servers also have ID files to prove their identity. However, most administrators insist that their real human users choose a complex password to secure their ID file and change it frequently, but most administrators don't use any password at all on their servers' ID files. This is because otherwise they would need a human to be present to type in the password every time the server is restarted and in any case sharing the ID file's password with all of the potential administrators would negate any security benefit from having a password.
When you run a stand-alone Java program that uses the Notes API on a machine which has the Notes client installed, you will run with the identity of the currently installed ID file (which typically requires you to type a password). When you run this same program on a machine which has the Domino server installed, you will run with the identity of the server's ID file (which probably doesn't require you to type a password). Because of this you will need to ensure that the server is granted the appropriate access rights to the databases your program needs to use.
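If you want to check which identity your program is actually running under, a few lines of Java will tell you (a minimal sketch of my own, not taken from the earlier post):

import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;

public class WhoAmI {
    public static void main(String[] args) throws Exception {
        NotesThread.sinitThread();  // initialise the Notes runtime for this thread
        try {
            // on a client machine this is where the password prompt appears
            Session session = NotesFactory.createSession();
            System.out.println("Running as: " + session.getUserName());
        } finally {
            NotesThread.stermThread();  // always release the runtime
        }
    }
}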

Domino has the concept of "scheduled agents" which can run on a server in the background and do various maintenance tasks. If these tasks executed with the server's ID it would be necessary to give the server access rights to a lot of databases, which would not be very secure. Instead Domino arranges that these scheduled agents run with the access rights of the user who signed the agent. This means that each user can run their own version of an agent with access rights to just their own databases.


Thursday, February 16, 2012

World Radio Day highlights how a technology with a long past can also have a bright future

Monday of this week was officially declared by UNESCO to be World Radio Day. I was listening to the latest episode of the BBC Click podcast, which was devoted to this event, while out jogging and I was struck by how audio broadcasting technology has both a long glorious history and a bright future.

Here are a few facts worth thinking about:
  • Radio was effectively the first world wide web. Modern radio stations tend to broadcast at frequencies that have a relatively short range so that neighbouring radio stations won't interfere with each other, but in the early days of radio stations tended to use Long Wave transmission which had a much longer range. In fact the BBC World Service has been broadcasting globally since 1932 by using a network of transmission stations strategically placed throughout the globe to re-transmit the programs originally transmitted from London (an architecture which is strikingly similar to that of the Internet). Of course they were assisted in the early days by the existence of the British Empire, but this network still apparently reaches an audience of almost 200 million listeners every day.
  • Radio stations were probably the first users of social media. As soon as telephones became widespread, many radio programs adopted the phone-in model whereby listeners could phone the radio station to contribute to the discussion happening in the studio. In recent years the technology has been updated to use Twitter and similar tools, but the basic idea has been popular for many years.
  • Radio technology is cheap and truly ubiquitous. While many people listen to radio programs on very sophisticated and hence expensive devices, cheap radio receivers are affordable for even the poorest of people. Their power consumption is also so low that battery powered models can be used in remote areas where no mains electricity supply is available. It is even feasible to have devices whose battery can be recharged by manually winding a handle.
  • Audio broadcasts can reach people even when they are busy. For example, many people listen to the radio while preparing breakfast in the morning and almost all cars come equipped with a radio that you can use to stop yourself getting bored on long journeys. I know that some people might read blogs while driving, but this is definitely not to be recommended for safety reasons. However, listening to the radio while driving is perfectly safe.
  • Modern distribution techniques like podcasting complement rather than compete with radio. I follow many different podcasts and I notice that many (but not all) of the best shows are radio programs that are simply recorded and turned into a podcast with minimal effort. The skills that radio broadcasters have learned over the years enable them to produce a very high quality product, and for minimal extra effort they can transform their existing radio content into podcasts that reach a much wider audience outside the range of their transmitters. In fact I know that many colleagues who are not natives of Ireland really enjoy the fact that the Internet allows them to easily keep in touch with home by listening to the local radio station from their home town.
  • Audio broadcasting is a field that is open to both professionals and amateurs at the same time. While I was in secondary school, I had great fun working as a part-time DJ on our local pirate radio station. The technology we used was amazingly cheap and low-tech even by the standards of the time. The production standards were not very high and were not really capable of competing with the real professionals, but we did nevertheless manage to build up a loyal group of listeners. It is no surprise that there never was a pirate television station in the west of Ireland despite the fact that there was a clear demand for an alternative to the single station that was available at the time - the costs of setting up even a very basic television station would be several orders of magnitude higher.
When you consider all of these factors it is clear that audio broadcasting is a technology which will flourish in the years ahead even if the tools and techniques we use to produce the program and/or listen to the content will continue to evolve.

Monday, February 13, 2012

[xpost] Running stand-alone java programs that read and write Notes/Domino databases

[This post was originally posted to my IBM internal blog]

Most code which interfaces with Notes/Domino databases will run in the context of a running client or server, e.g. a Notes agent which runs on a schedule or in response to an event. However, there are times when you might want to run a stand-alone program that will read and write Notes/Domino databases.

Bob Balaban (who wrote the definitive book on the Notes Java API) recommends that programmers should write agents in such a way that they can be run either as a stand-alone program or as a Notes agent, because it is so much easier to debug a stand-alone Java program than a Notes/Domino agent (he calls this the two headed beast). However, it is also often handy to distribute a program which modifies a Notes database in the form of a stand-alone program, e.g. I recently had to write a program which tweaked the settings of a Sametime Community server - since many Sametime administrators don't even realize that they have a Domino server underneath, I thought it would be easier to distribute the code as a stand-alone program rather than asking administrators to go through the complexity of installing and enabling a Notes agent.
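The two headed beast pattern looks roughly like this (a sketch of my own illustrating the idea; names.nsf is just an example database):

import lotus.domino.*;

public class TwoHeadedBeast extends AgentBase {

    // entry point when run as a Notes agent
    public void NotesMain() {
        try {
            doWork(getSession());
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }

    // entry point when run as a stand-alone program
    public static void main(String[] args) {
        try {
            NotesThread.sinitThread();  // initialise the Notes runtime for this thread
            doWork(NotesFactory.createSession());
        } catch (NotesException e) {
            e.printStackTrace();
        } finally {
            NotesThread.stermThread();  // release the runtime
        }
    }

    // the real logic lives here, independent of how we were started
    private static void doWork(Session session) throws NotesException {
        Database db = session.getDatabase("", "names.nsf");
        System.out.println("Opened '" + db.getTitle() + "' as " + session.getUserName());
    }
}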

One of the things that is complex about running a piece of Notes/Domino code is that you need a lot of context, e.g. what Notes ID to run under. If you are running a Java agent on a machine which has Notes installed upon it then the environment variables will already be set up for you, but if your machine does not have Notes installed it can be tricky to figure out all of the paths and environment variables that are needed. In particular I have often wasted a lot of time trying to get Java programs to run on Linux Domino servers.

What many people don't realize is that Domino Linux servers come with a very handy startup script in /opt/ibm/lotus/notes/latest/linux that can do all of the hard work for you. To use it, invoke your program via the script, like:
/opt/ibm/lotus/notes/latest/linux/startup /LOCATION/MYTOOL
where LOCATION is the path to your tool and MYTOOL is the name of your tool. Then your tool will get invoked with all of the necessary environmental variables defined properly.

This startup script is really handy; I am not sure why it is not more widely documented.