Results tagged “Computer/Programming”

Two weeks ago, on a Friday afternoon, I received an emergency call from a friend. Her sister had stored all of her university work on an external hard drive. The rest of the story is predictable from this point on. Of course the data was stored nowhere else (the drive SHOULD have been only a backup drive but grew into the main working device), and suddenly, from one second to the next, the drive was not accessible anymore. On top of that, it emitted a clicking sound every few seconds. Several attempts to get the drive running again had failed so far. My friend had already checked the most obvious causes like dirty contacts or loose connections without success, so they gave me a call, already suspecting a head crash.

When the drive arrived in my hands, the situation was this: an external, almost brand-new, flat-lying 1TB hard disk exposing USB and eSATA connectors. To limit further damage during uptime a bit, I immediately fixed it in a vertical position using a stand from another external disk I had lying around. My thinking was that this should reduce the bouncing of particles somewhat (because some of them would collect on the lower side of the casing) if the disk surface had indeed taken damage, and slow down the degradation to some degree. A very quick test from within Windows showed that the drive tried to register with the system but failed to do so. So no more tinkering here; time to quickly boot a Linux system for recovery.

I started up SystemRescueCd, which I had installed on a USB stick (using SARDU) for situations like these. Connecting the drive via eSATA failed because the drive didn't show up in /dev/, so I had to fall back to the slower USB connection for all following steps. Connecting via USB took some time (about 30-60s) until the device showed up in /dev, but then it was relatively accessible. The first thing I checked was the SMART info using smartctl -a /dev/sdd, where it became pretty obvious that the drive was badly damaged: about 100 reallocated sectors and a handful of pending reallocations. Very strong signs of a head crash indeed, so no time to waste; get as much data off the disk as possible.

Trying to mount the disk failed, so I could not simply copy the files off but had to make a complete image first and work with that later on, without the failing drive. At this moment another problem struck: I had nothing around where I could store a 1TB image file. At most I could free up 600GiB on a Linux drive.

I had to make another call to find out that there should be an NTFS filesystem on it with about 200GiB of data. The drive was relatively new and there had not been a lot of activity beyond storing and occasionally updating the files. So I hoped for a lot of uninitialized areas, which would be easily compressible. A quick check with hexedit /dev/sdd confirmed my speculation: there were large zeroed-out areas at the end of the disk. This confirmation took a while because I seemed to hit erroneous areas right at the beginning of the disk, where the tool stalled until the read-error timeout snapped it out.
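Instead of eyeballing the disk in hexedit, one could also estimate how much of the device is zeroed by sampling blocks across it. Here is a minimal Python sketch of that idea (not part of my actual workflow; the path, block size and sample count are illustrative, and you'd run it read-only against the device or an image file):

```python
import os

def zero_fraction(path, block_size=1 << 20, samples=64):
    """Estimate the fraction of zeroed blocks by reading evenly
    spaced blocks across a file or block device."""
    zero_block = b"\x00" * block_size
    zeros = total = 0
    with open(path, "rb") as f:
        size = f.seek(0, os.SEEK_END)          # seek to end to learn the size
        step = max(size // samples, block_size)
        for offset in range(0, size - block_size + 1, step):
            f.seek(offset)
            chunk = f.read(block_size)
            if chunk == zero_block[:len(chunk)]:
                zeros += 1
            total += 1
    return zeros / total if total else 0.0
```

A result close to the expected unused fraction of the disk is a good hint that a sparse image will stay small.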

The Linux filesystem ext3 supports sparse files, which simply don't allocate disk blocks for unused/zeroed areas of a file, so I had hopes that the 1TB image file would still fit into my 600GiB of free space.
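The trick behind sparse files is that a writer which seeks over zeroed regions instead of writing them leaves "holes" that occupy no disk blocks. A small Python illustration of the principle (file name and sizes are arbitrary; whether the hole actually saves space depends on the filesystem):

```python
import os

def write_sparse(path, hole_size, data=b"tail"):
    """Seek past hole_size bytes and write only the trailing data.
    On filesystems with sparse-file support (ext3/ext4, ...) the
    skipped region becomes a hole with no allocated blocks."""
    with open(path, "wb") as f:
        f.seek(hole_size)   # skipping, not writing zeros, creates the hole
        f.write(data)
    st = os.stat(path)
    # st_size is the logical size (hole included); st_blocks * 512 is
    # what is actually allocated on disk
    return st.st_size, st.st_blocks * 512
```

Reading the file back still yields zeros for the hole, which is exactly why a 1TB image of a mostly empty disk can occupy far less physical space.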

A simple copy of /dev/sdd (with cp or dd) would fail because of the errors on the disk; luckily there are tools available which save the working areas and try to recover the failing ones. I chose ddrescue for this job because it has a built-in switch for creating sparse target images, which saved me from creating one manually. I roughly stuck to the instructions from the Forensics wiki and made a first pass over the disk without retrying failing sectors, to save as much of the intact data as possible:

ddrescue -d -S -n /dev/sdd disksddsparse logfile

This first run took quite some hours because it transferred 1TB over USB at 30MB/s at best, and almost zero when hitting defective sectors. Thanks to the logfile (the last parameter) I was able to interrupt the process overnight, as I didn't want to let it run unattended for too long. During the copy I checked the SMART info from time to time in a second terminal, which showed me that either the disk was degrading by the minute or the disk logic was just now counting previously undetected errors. But the further the initial rescue progressed, the larger the intervals between errors became, which raised my hopes. In the end the first run finished with the full 1TB image stored on my disk (taking only ~250GiB thanks to the sparse option), with about 130MiB of errors scattered across ~1100 locations. Not that bad, but there was surely more to gain, so on to the second run.

In the second run I started ddrescue in a mode where it looks more closely at the erroneous spots and tries to home in on the exact location of each error within the error area, to get out all bytes which are not actually affected. These actions are called splitting and trimming of the defects:

ddrescue -d -S /dev/sdd disksddsparse logfile

This repair run finished faster because it only checked the errors; nevertheless it still took some hours. It was quite successful, as it lowered the number of error locations to ~904 and the affected data area to 512kiB. Wow. I wondered if there was more to squeeze out. Let's retrim the errors and retry them automatically without a retry limit:

ddrescue -d -S --retrim --max-retries=-1 /dev/sdd disksddsparse logfile

Again I let this run for some hours, and when it seemed to make only minimal progress anymore (around the 5th automatic retry) it was down to 859 errors summing up to 490kiB. So the outcome of the rescue operation finally looked quite promising. Just for the curious: the smartctl statistics were off the charts by then, with about 900 reallocated sectors and 1300 pending. And big fat letters telling me "FAILING NOW"...
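The logfile ddrescue maintains is itself a plain-text map of the disk, so error statistics like the ones above can be pulled out of it directly. A sketch of such a summary, assuming the documented map format (comment lines start with '#', the first data line is the current position, and the following lines are '<pos> <size> <status>' with '-' marking bad areas):

```python
def summarize_mapfile(text):
    """Count the bad areas ('-' status) recorded in a ddrescue
    map/log file and sum up their sizes in bytes."""
    lines = [line for line in text.splitlines()
             if line.strip() and not line.lstrip().startswith("#")]
    bad_areas = bad_bytes = 0
    for line in lines[1:]:  # lines[0] is the current-position line
        pos, size, status = line.split()[:3]
        if status == "-":
            bad_areas += 1
            bad_bytes += int(size, 16)
        # other statuses: '+' rescued, '?' non-tried, '*'/'/' not yet trimmed/scraped
    return bad_areas, bad_bytes
```

Fed the logfile from the runs above, it should report the same "number of error locations / total error size" figures that the ddrescue output shows.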

The last step was to mount the partition within the disk image. I found the offset for the partition mount by comparing the outputs of the following two hexedit views and locating the start of the second one within the first (luckily Linux could still detect the partition on the physical drive itself):

hexedit /dev/sdd
hexedit /dev/sdd1

If this hadn't been possible I would have calculated the partition offset using one of the guides on the internet, here (German) or here. After that I could mount the partition using...

mount disksddsparse /mnt/image -o ro,loop,offset=0x7e00

... and began to copy the files out of the partition. There were some filename encoding issues and warnings during the copy, which were finally resolved by mounting with a manually enforced charset:

mount disksddsparse /mnt/image -o ro,loop,offset=0x7e00,iocharset=utf8
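For the record, the 0x7e00 in the mount commands is nothing magic: it is the partition's starting sector times the sector size (63 * 512 = 0x7e00 for a classic first partition). If Linux hadn't detected the partition, the offset could also have been read straight out of the MBR at the start of the image. A hedged Python sketch of that calculation (assuming a plain MBR partition table, not GPT):

```python
import struct

SECTOR_SIZE = 512
TABLE_OFFSET = 0x1BE   # the MBR partition table starts at byte 446
ENTRY_SIZE = 16        # each of the four primary entries is 16 bytes

def partition_byte_offset(mbr, index=0):
    """Return the byte offset of partition `index`, usable with
    mount -o loop,offset=...  `mbr` is the first 512 bytes of the disk."""
    entry = TABLE_OFFSET + index * ENTRY_SIZE
    # bytes 8..11 of a partition entry hold the starting LBA (little-endian)
    (start_lba,) = struct.unpack_from("<I", mbr, entry + 8)
    return start_lba * SECTOR_SIZE
```

Applied to the first 512 bytes of an image like disksddsparse, this should yield the same offset the hexedit comparison produced.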

Well, that's the story of a saved academic career (or at least of a gigantic pile of work). I hope my experiences may help someone else with rescuing data from a failing disk. Now I just have to decide what gift to ask for in exchange for this rescue operation... ;)


And another month went by without much news from me. There haven't been many changes anyway. University is stressful as always, the amount of work at my employer keeps at a high level and there is only little spare time for relaxing.

Furthermore, over the last week my laptop has been suffering a slow death, which I tried to prevent by repairing the failing operating system several times. Eventually I had to give up when the installation of some Windows updates rendered several subsystems (networking, graphics, ...) unusable.

I bit the bullet, bought a new hard disk and installed a new OS from scratch. Currently I'm in the process of transferring all of my data from the old HDD to the new Win7 installation. It will still take a few more days until I'm at the point where I can effectively perform real work again, but I'm slowly getting there.

What's slowing this down too is the fact that I've been sick since Friday and in bed since Saturday. Happy Easter, ha ha :/


Woke up, checked the computer, repartitioning finished. Total time: a bit over 10 hours.



This late afternoon I decided to do a bit of re-partitioning on my large external harddisk and make some more room on the FAT32-partition to make the data on it (mostly recorded broadcast videos) available for my LCD TV which only accepts FAT32 partitions.

Everything went well in the beginning: the large 800GB partition was shrunk within two hours (yes, it takes a while over USB) and moving the following partitions took only a few minutes more. But the enlargement of the FAT32 partition from 100 to 300GB has now already been running for about six hours and I'm getting a bit nervous...

I'll let it run overnight now and hope that it'll be finished in the morning. But I don't know how to proceed if it's still busy when I get up.


Yesterday I spent a few hours trying to develop a minimalistic application for my new Android phone. With the help of a quick Hello World tutorial I was up and running quite quickly: within a few minutes I had Hello World showing up in the Android phone emulator.

When I tried to start it on my real device, I ran into the problem that my phone was not showing up as a connected device when I plugged it in via USB; only its internal memory was available as a drive. After some investigation and research I found out that I had plugged in my phone before activating USB debugging mode on the phone, which confuses the driver setup on Windows. This thread explains the details and how to solve the issue.

The next thing I did was find out how to load the contacts stored on my SIM card. This is not a very well documented task, so here is the required code for using Content Providers to load contacts from the SIM module. The method returns the contacts as an ArrayList of Strings, each consisting of the three fields name, number and _id.

private ArrayList<String> retrieveSIMContacts() {
    // Android 1.5 exposes the SIM contacts under this URI
    Uri uri15 = Uri.parse("content://sim/adn/");

    ContentResolver resolver = getContentResolver();
    Cursor results = resolver.query(uri15, null, null, null, null);

    // Android 1.6 uses a different URI
    if (null == results) {
        Uri uri16 = Uri.parse("content://icc/adn/");
        results = resolver.query(uri16, null, null, null, null);
    }

    final ArrayList<String> simContacts = new ArrayList<String>();
    final int nameIndex = results.getColumnIndex("name");
    final int numberIndex = results.getColumnIndex("number");
    final int idIndex = results.getColumnIndex("_id");

    while (results.moveToNext()) {
        final StringBuilder builder = new StringBuilder();
        builder.append(results.getString(nameIndex)).append(" : ");
        builder.append(results.getString(numberIndex)).append(" : ");
        builder.append(results.getString(idIndex));
        simContacts.add(builder.toString());
    }

    return simContacts;
}

All in all, within two or three hours I had a small app running on my real phone, showing the contacts from both the phone itself and the SIM card in two list views selectable via tabs.

Quite fast, I'm impressed. I expected that Android development would be a bit more complicated...

Update 2010-01-04: Update for Android 1.6


A recent entry in Argonaut's blog deals with a piece of C# code which looks valid at first glance and even compiles cleanly, but fails with a runtime exception when executed.

Argo shows the C# code for his example, but (as he also mentions) the same code can be used in a Java example. The following class hierarchy...

class BaseType { }

class SubTypeA extends BaseType { }

class SubTypeB extends BaseType { }

looks innocent so far. But if it is used the following way

public static void main(String[] args) {
  BaseType[] baseArray = new SubTypeA[3];

  baseArray[0] = new BaseType();
  baseArray[1] = new SubTypeA();
  baseArray[2] = new SubTypeB();
}

things get interesting. The code compiles without any problems (as in C#), but when executed one is faced with a java.lang.ArrayStoreException. The cause for this is buried in the Java Language Specification, 4.6 (thanks for the hint in the post's comments, which saved me some searching). At compile time the static type of the array variable is BaseType[], and it then gets the SubTypeA array assigned. The compiler cannot know at the assignment sites that the runtime type of the array forbids these elements. That's why arrays also check their assigned objects at runtime and throw this exception if something invalid occurs (as specified in JLS 4.7).

I think this problem would be solvable in Java, C# and most other languages which treat arrays this way. But I also think that this would open a Pandora's box full of problems and complexity arising from a new constraint introduced just for this specific issue. Personally I think this issue is not a common one and appears only in border cases, so most developers can deal with it for now. Nevertheless, I believe the issue could be solved; it just may be too late for Java and C#.


I'm running Firefox on all my computers. And I have a habit of collecting lots of tabs for later reading or further actions. Furthermore, a lot of the pages I visit run at least a little bit of JavaScript; some use scripting quite extensively.

So it was no surprise that, over time, the oldest of my computers at work, which I use for reading email, was getting slower and slower, with Firefox using up to 0.5GB of the available RAM, which got paged out to the hard disk a lot when I switched back and forth between Outlook and some other applications. It became quite inconvenient to use.

I decided to take action and try the speed- and memory-optimized Firefox builds from pigfoot (via Lifehacker). Installation consisted of unpacking the downloaded package (available as a self-extracting archive or as a portable version) and copying it over my Firefox installation (after making a backup, of course).

Upon startup, the first thing I noticed was the new application icon and startup screen. And it felt just a bit faster, almost unnoticeably so. In all other respects it behaved exactly like the original Firefox. But now, after having it running continuously for some days, which would have made the old Firefox crawl like a disabled snail, it is still running fluently and reacting a lot faster than I expected.

I'm very satisfied with this build of Firefox and hope that some of the optimizations somehow make it into the original build. I think many power users like me would appreciate that.

One drawback should not go unmentioned: I do not know whether this version receives the automatic updates from Mozilla. If not, new versions have to be installed manually once they become available on pigfoot's weblog. But for the improvements I'm experiencing on my stressed computer I'm happy to live with that.


Recently, after working for some time without any problems, my Cisco VPN Client started causing bluescreens again when connecting to a VPN server. This hiccup was already present when my mobile computer was brand new; I tried a lot of things back then, but in the end the problem went away when I had to set up my computer again for another reason.

But now it had reappeared. The only cause I could think of was that I had recently installed a lot of updates for my HP mobile workstation and a lot of Windows updates.

I then went searching the web again, hoping that someone had found a solution to my problem in the meantime. And indeed, this posting solved it. The fix works by disabling the (hidden) vsdatant device in the device manager.

  1. open the device manager
  2. select devices by connection from the view menu
  3. select show hidden devices, again from the view menu
  4. now in the devices list scroll down to the bottom until you find the vsdatant entry
  5. right-click and disable the device
  6. you'll be presented with a reboot-request, which you should accept before trying to connect to a VPN now

I do not know exactly what this vsdatant device is for, but from hints on the internet it seems to be connected to the TrueVector (or stateful) firewall which is included in the Cisco VPN Client. As I do not need this (nor know anyone who does), and furthermore found no articles on the net indicating further problems with a deactivated vsdatant, I consider the case solved for me.


For the past two weeks I've been working on an exercise for university. The task was to create Enterprise Java Beans (EJB) and Web Services (WS) which were then deployed to and run on the JBoss Application Server. We were using Eclipse Ganymede EE, Java JDK6 and JBoss version 5.0.1 GA for our purposes.

My colleagues and I had quite some trouble getting it all to work, but eventually we managed, because most of the issues are documented and their resolutions are available on various sites somewhere on the net. Only this particular issue seemed to be covered nowhere, or if it was, the suggested solutions did not work for us. I'm documenting the solution that worked for us here; maybe it helps somebody who runs into the same issue.

Many people seem to succeed with the Instructions for using JBoss5 on Java6 from the JBoss Getting Started Guide, but for whatever reason this did not work for us. If you have the same problem persisting even after copying around the library-files, read on.

I hope the solution presented here can also help you with your problem. If you have anything to add, find a mistake or can give any other feedback, please leave me a comment.

The issue

JBoss is up and running without trouble, with the web services deployed and visible on the endpoint overview at http://localhost:8080/jbossws/services. But upon calling one of the provided methods, JBoss chokes with the exception:

ERROR [SOAPFaultHelperJAXWS] SOAP request exception
java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage
    at javax.xml.soap.SOAPMessage.setProperty(
(and some more exceptions with "setProperty must be overridden..." appearing several times)

and the webservice call fails with no result.

Quickly explained solution

  1. install JDK5 and add it to Eclipse's installed Runtime Environments
  2. set the JBoss execution environment to use JRE5
  3. set the JRE in the webservice project's build path to JRE5
  4. set the Java Project Facet in the webservice project to 5.0

Detailed solution

  1. download JDK5 from Sun
  2. install JDK5 (only the JDK is needed; no separate JRE, docs, etc.)
  3. add JDK5 to Eclipse as Installed JRE (Window->Preferences->Java->Installed JREs->Search...)
  4. change the startup configuration of JBoss to JDK5 (double-click on the JBoss entry in the Servers tab->General Information->Open launch configuration->JRE->Alternate JRE)
  5. change the project's JRE System Library to use JDK5, either via right-click on the system library entry in the project->Properties, or right-click on the project->Build Path->Configure Build Path->select "JRE System Library"->Edit. Then set the Alternate JRE to JDK5
  6. change the project's Java facet to JDK5 (right-click on the project->Properties->Project Facets->change the setting next to the "Java" entry to "5.0")
  7. check that in %JBOSS_PATH%/server/default/deploy (or your own configured deploy path in JBoss) there are no .war files left from your current project
  8. if not already suggested by Eclipse itself, rebuild the webservice project (if building automatically, just clean the project via Project->Clean...)
  9. start JBoss via Eclipse
  10. check that somewhere near the beginning of the JBoss console log something like "Java Version: Sun Microsystems Inc" or "Java HotSpot(TM) Client VM" appears as "INFO [ServerInfo]" log line

If you are receiving errors like "java.lang.UnsupportedClassVersionError: Bad version number in .class file", then there is still something connected with JDK6 left in your project. Check the steps again, and also check your included libraries for anything suspicious, then rebuild the project.

After all these steps you should finally be able to call web service methods on your endpoint without causing those exceptions anymore.

Issue background

The technical details and origins of this error are explained in JBWS-2649, along with an initial solution. In short, JRE/JDK6 includes a dummy implementation of this setProperty() method which shadows the required implementation supplied with JBoss. The mentioned initial solution copies the supplied libraries to a location on the classpath where they should be loaded before the JRE libraries, but as already mentioned this did not work for most of our class.


For one of my lectures at university we had to carry out a project. I chose a software project with a topic I suggested myself and which I could possibly also use later at work.

Meanwhile my project has grown into a not-so-bad framework and I'm seriously thinking of opening it to the public after it has been completed at the university.

I already checked with the university; I just have to talk to some people at work about that issue.

So expect a new Java project in the next few weeks.


As you may have noticed, it's already that time of the semester where all the homework, projects and preparations for exams accumulate towards the same date.

The upcoming weekend is again the most pressing date of this semester, as it holds some of the last in-person sessions for several of our lectures. And the last in-person session is often used for end-of-term exams.

Furthermore, there are some presentations and hand-ins to be finished by Saturday.

What's currently taking most of my time is my project work for this semester. I don't know if I've already hinted at it somewhere in a post, so let me give you a short intro.

I'm creating a Java framework that lets developers define the structure of any file in some sort of description language, hand this description and a file containing data in that format to a parser generated from the description, and get back a data structure that exposes the file's contents in an easily accessible way.

Initially the project was about researching such frameworks, comparing them to each other and creating a prototype implementation for an application within our company. After I had handed in the description and plan for my project work, I found out that there is no existing framework for Java which comes even near the functionality I was looking for. Nothing. Nada. I just found something similar implemented for Python: it's called Construct and resembles quite exactly what I had been looking for. So I decided to resurrect my old lexer and parser know-how and create a framework on my own. Since I had already worked on a thesis which involved the creation of a C++ parser, I knew that finding the right tools and creating a language from scratch is not something one can pull off without a great deal of theoretical background in that area. I already knew which problems I would probably be facing and how to avoid several caveats, and so far it turns out that I'm not very far off my expectations.

Just that I need some more time to get it finished :P

Nevertheless, what I've also been thinking about is that maybe I'll open-source this framework if there aren't obstacles like copyright or usage issues with the university or my employer, for whom I'm implementing this.

Maybe there will be updates on this in the future... maybe.


Yesterday I read an article about object-oriented languages (I don't remember which one; it seems it wasn't so important) when I noticed a small note in the article mentioning a programming language I hadn't heard of for quite some years.


Profan² was the first programming language I encountered which supported GUI elements, about 13 years ago. I used it to program basic stuff (I don't remember anymore what exactly) on Windows 3.11. Before that I had only been programming BASIC and Assembler on my C64.
Reading about it gave me a quite nostalgic feeling and some nice memories of my first steps in the creative world of IT, and I'm happy to see that one of my "childhood toys" still exists today :)


Just as a note that I'm still alive and relatively well, I'd like to mention that I've decided to buy a new laptop.
There is a special program at our university which allows students to buy mobile computers at a reduced price.

As my old laptop is now slowly falling apart, I've ordered an HP EliteBook 8530w. My variant comes with the Intel Core2 Duo T9550 processor, a WUXGA display and apparently a ruggedized case.

I'm very curious about when it'll arrive...


For some time now I was aware that stackoverflow.com, the Question/Answer platform from Joel Spolsky, was up and live.
But I didn't bother any further, as I was quite happy with Google and the set of references and personal know-how I've built up over time.

A few days ago I had a problem which was not solvable with my usual set of resources, and in my desperation I hopped over to StackOverflow to try to find help there.
Within a day I got exactly the hint I needed to solve my problem and continue working.

Quite happy with this experience, I decided to give it a bit more thought and maybe also answer some questions myself where I knew a bit about the topic.
After some time I noticed a bar on top of the site telling me that I had earned a "Student" badge, and seconds afterward a "Scholar" badge. What were these? Reading on, I found out that the "Student" badge means that someone voted on my question because they thought it was a useful question, and the "Scholar" badge means that an answer of mine had been accepted as the most helpful one by a questioner. And then I noticed that at the top of the page there was some sort of score, the Reputation, and also a small statistic on my earned badges.

After thinking about it some more and continuing to participate on the site, I came to the conclusion that the way questioning and answering works on stackoverflow.com is the most ingenious way to help developers I've seen so far. It turns the boring process of writing questions and answers on forums and webpages into an exciting game, where good questions and answers are rewarded (by upvoting) and bad ones are discouraged (by downvoting). And you have a running score, the Reputation, which you can compare to others; it is calculated from up/down-votes on your questions and answers and from how often your answer was accepted as the best one by the person asking. Even how often your question has been found and visited counts towards points and badges.

Earning lots of different badges and improved abilities on the site (e.g. leaving comments at a reputation of 50, retagging questions at 500, and even editing other people's posts at 2000) adds to the fact that one wants to reach these goals and has long-term motivation to keep helping.
And this means high-quality questions and answers, as these earn more points, faster and more consistently in the long term, than quick answers which contain only tiny pieces of help and collect a few points for just a few hours.

I personally have already added stackoverflow.com to my research sources as a quick, reliable, high-quality resource and will continue to build up my high score :)


The problem

I just had a problem with the installer of OpenOffice.org 3.0. I wanted to modify my installation and add the Base module. I didn't have the installation files lying around anymore, so I had to download them again, which I did from OpenOffice.org. After downloading, the setup extracted the installation files into a specified directory and started the installation (where I expected to be able to modify my local setup). This is where the first hints of errors showed up.

The setup showed me a messagebox which told me that

The same version of this product is already installed.

and completed afterward, denying me any possibility to modify my setup. Fine, I thought, and went into the Windows 'Add or Remove Programs' control panel to change my setup there. There I was able to modify my setup for OpenOffice 3.0 and continued to the installation. I expected it to ask me for the location of the setup files, as I didn't have them in the original installation directory anymore. It asked me for the file 'openofficeorg30.msi' and I navigated to the directory where I had extracted the files after the download.

The following error telling me that

The installation file '..../openofficeorg30.msi' is not a valid installation package for the product OpenOffice.org 3.0. Try to find the installation package 'openofficeorg30.msi' in a folder from which you can install OpenOffice.org 3.0.

was a bummer. All of my attempts to install or modify my setup failed at the latest when that file 'openofficeorg30.msi' was required, even though it was sitting there in the extracted installation directory and was completely valid.

The solution

It took me quite some time to track down the source of this problem, but finally I got it. When I initially installed OpenOffice 3.0, I used an installation package WITHOUT an included JRE because I already had a more recent one installed. This time, when I downloaded the installation files, I got the default version which has an INCLUDED JRE. And seemingly my JRE-less installation can't cope with installation files for OpenOffice.org 3.0 with an included JRE, throwing the above errors at me.

I investigated a bit more and got my hands on a setup file without an included JRE, and this time modifying my setup worked flawlessly, straight from the installation file itself.

The setup file with included JRE is named OOo_3.0.0_Win32Intel_install_wJRE_en-US.exe and the one without is OOo_3.0.0_Win32Intel_install_en-US.exe.

The problem here seems to be that by default you just get JRE-including installation files from OpenOffice.org.

Do the following to get a setup-file for OpenOffice 3.0 without JRE:

  1. Go to OpenOffice.org and click on "I want to download OpenOffice.org"
  2. Don't download yet, but click the small link below called "Get more platforms and languages"
  3. On the following page there is a small checkbox "Include the Java JRE with this download" just above the list of the download links
  4. Uncheck this checkbox
  5. Choose your favorite installation from the list below; this time you get a version which has no JRE included

Maybe this little guide is helpful for others too, if they hit the same problem as I did.


Just an hour or so ago I tried to set up a backup of some of my folders to my NAS. I found the tool FireSync and its description sounded promising. I tried to find some reviews but found none, since it had been published just today. So I gave it a shot.

I should have been more careful and tried out the backup on a dummy folder, not my main application collection. During the first run of the backup there was a "Disk is full" error at the backup source. As if that weren't strange enough, a retry found just 4 or so changed files. Hm... After some moments I checked the source folders. The folders were still present, but not the files which had been contained within them. WTF? I made sure I hadn't checked the "delete after sync" checkbox, and it wasn't checked. Nevertheless, the files were gone.

Currently I'm busy recovering the deleted files with NTFS Undelete. So far everything looks good and it seems that I can restore all of my files.

So if you ever want to try out the backup/sync tool FireSync, test it in a dummy directory first.


As I just read, Microsoft has released more specifications under its Open Protocol Specifications program. I browsed through it a bit and noticed that it now also contains specifications for the protocols between Exchange and Outlook. I may very well be wrong here, but if this means that the whole communication between Outlook clients and Exchange servers is now documented and freely available, I guess it will just be a matter of time until many of our beloved mail clients will not only be able to use IMAP/POP3/SMTP to talk to MS Exchange, but also the custom/proprietary protocol/API for accessing the mail servers. Also, groupware and calendar applications will be able to integrate the excellent organisation features of Exchange (appointments, people availability, address completion, etc.) into their software.

I really hope that my interpretation of what I have seen so far is true...


In the last few days, my Skype contacts and I have experienced several connectivity issues: messages not being delivered, or being delayed for hours (some say even days). A quick look around the Skype homepage revealed no further insight. I did see, though, that Skype recently reached 12 million concurrent users online; maybe they have some slight problems on the backend side...


My past efforts to create a Wikipedia on DVD have stayed in my head all this time, although all my attempts to build a recent edition have failed.

But now I stumbled over a link which presents an approach different from my previous tries of creating all articles as separate files. Instead, it sets up an unbelievably tiny local webserver and accesses the article data directly inside the bzip2-compressed archives of the world's largest encyclopedia. In compressed form it fits nicely on a DVD, even a recent dump. Maybe I'll try this one out soon. Drawback: no images in the dump, just textual content.
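The trick behind this approach, as far as I understand it, is that multistream dumps are just many small, independent bzip2 streams concatenated into one file, so a reader can seek to a stream boundary and decompress only the block holding the wanted article. A minimal Python sketch of that core idea (file name and offsets here are made up for illustration; the real reader has its own index format):

```python
# Decompress a single bz2 stream out of a multistream dump.
# "dump_path" and the offsets are hypothetical -- real dumps ship a
# separate index file mapping article titles to stream offsets.
import bz2

def read_block(dump_path, offset):
    """Decompress exactly one bz2 stream starting at `offset` in the dump.

    A BZ2Decompressor stops at the end of its stream (dec.eof becomes
    True), so the rest of the archive is never touched.
    """
    dec = bz2.BZ2Decompressor()
    out = bytearray()
    with open(dump_path, "rb") as f:
        f.seek(offset)                   # jump straight to the stream
        while not dec.eof:               # stop at the end of this stream
            chunk = f.read(64 * 1024)
            if not chunk:
                break
            out += dec.decompress(chunk)
    return bytes(out).decode("utf-8")
```

The tiny webserver part would then only need the title-to-offset index to answer each request from the compressed dump, e.g. with Python's built-in http.server.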

See the Wikipedia offline reader (via) for details.


When you're working in a large team, from time to time someone has an idea of how to increase effectiveness or productivity by using a new technology or tool.

The common way of incorporating such enhancements is to do a little research on the topic, present the results to a project leader or manager, and then decide whether to investigate further or to incorporate the thing into the project.

The downsides of this approach are that it can take quite some time, and that the person responsible for the final decision may not have enough knowledge of, and overview over, the whole process and environment to make the right call.

But sometimes all this trouble can be avoided with what I call the "just do it" approach: one or several developers set up a new tool or process and "just use it", without asking their bosses or team leads first.

At my workplace there are several examples of this, and many of them have improved our daily work to a great extent. For example, some developers just set up a CVS server and began keeping their source code under version control. Bam! Development speed went up, because they could exchange their sources faster and adapt to the changes of others more easily. But then conflicts and subtle code changes more often broke existing functionality. Another "just do it" solved the problem: automated test cases. At the same time, the "just do it" adoption of Extreme Programming also boosted our productivity and output, even though we didn't adopt all of its practices (for example Pair Programming, which at that time was still a cornerstone of XP but doesn't seem to be anymore; nowadays it seems to resemble Ping-Pong Programming). The introduction of Bugzilla also sped up our development and aided the documentation and traceability of changes and decisions.

In the new project, two weeks ago, I saw the need for a change in our communication practices. The team has grown to a size where discussions and decisions cannot involve the whole team anymore without ending in trouble or boredom. But letting only a small group decide on steps affecting the whole team risks ignoring the knowledge and experience of the rest. So after a small dispute with some members of the team, I decided to set up a forum (namely phpBB3) to get around this problem and presented it to the team. It now seems that the forum will be accepted as a communication platform where discussions can draw on the know-how of the whole team while avoiding boring and unproductive meetings. We'll see how it integrates, but the chances are not bad that this will be another "just do it" success story.

One requirement for such successes is support from the applications in question. If it takes days to set up a server or to integrate it into an existing environment (common logins/LDAP, mail, etc.), the tool almost rules itself out as a "just do it" candidate. It is essential that an application or server can be set up quickly and with as little configuration effort as possible. It must also be possible to move an application from a personal or test machine to a server if it proves so successful that the need for a more reliable environment arises; the application should support backup and restore of its configuration and database contents.

I personally think the largest leaps in efficiency, productivity and reliability at our workplace have had their seed in such "just do it" actions. Mostly the developers themselves know best how to improve their work, so why make it hard for them to change structures or procedures? Or, even worse, force applications and tools on them which add no value to their work, without asking them beforehand...

