Sneaking features through the back door

15/04/2017

Sometimes programming language developers decide that certain practices are bad, so bad that they try to prevent their use through the language they develop.

For example: in both Java and C#, multiple inheritance of classes is not allowed. The language specification prohibits it, so trying to specify more than one base class results in a compiler error.

Another blocking “feature” these languages share is syntax that prevents the creation of derived classes altogether.

For Java, this means declaring the class final, which might be a bit confusing for newcomers, since the same keyword is also used to declare constants.

As an example, this will not compile:

public final class YouCanNotExtendMe {
    ...
}

public class TryingAnyway extends YouCanNotExtendMe {
    ...
}

For C# just replace final with sealed.

In both languages, this can also be applied to specific methods instead of the entire class, to prevent overriding.

While application developers may not find many uses for this feature, it shows up even in the Java standard library. Just try extending the built-in String class.

But, language features are tools in a tool box.

Each one can be both useful and misused or abused. It depends solely on the developer using the tools.

And that is why as languages evolve over the years, on some occasions their developers give up fighting the users and add some things they left out at the beginning.

Usually in a sneaky, roundabout way, to avoid admitting they were wrong or that they succumbed to peer pressure.

In this post, I will show two examples of such features, one from Java, and one from C#.

C# Extension methods

In version 3.0 of C# a new feature was added to the language: “extension methods”.

Just as their name suggests, they can be used to extend any class, including a sealed one. And you do not need access to the class implementation to use them. Just write your own static class with a static method (or as many methods as you want) whose first parameter is marked with the this keyword and has the type you want to extend.

Microsoft’s own guide gives an example of adding a method to the built-in sealed String type.
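That example looks roughly like this. This is only a minimal sketch; the class name and the WordCount method follow the spirit of the guide and are purely illustrative:

using System;

public static class MyExtensions
{
    // Reads like an instance method on String, but compiles down to a static call.
    public static int WordCount(this string str)
    {
        return str.Split(new[] { ' ', '.', '?' },
                         StringSplitOptions.RemoveEmptyEntries).Length;
    }
}

// Usage:
int count = "Sneaking features through the back door".WordCount();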

Those who know and use C# will probably argue that there are two key differences between extension methods and derived classes:

  1. Extension methods do not create a new type.
    Personally, I think that only affects compile-time checks, which can be replaced with run-time checks if not all instances of the ‘base’ class can be extended.
    Also, a creative workaround may be possible with attributes.
  2. Existing methods cannot be overridden by extension methods.
    This is a major drawback, and I cannot think of a workaround for it.
    But you can still overload methods. And who knows what will be added in the future…

So it may not be complete, but a way to break class seals was added to the language after only two major iterations.

Multiple inheritance in Java through interfaces

Java has two separate mechanisms to support polymorphism: inheritance and interfaces.

A Java class can have only one base class it inherits from, but can implement many interfaces, and so can be referenced through these interface types.

public interface IfaceA {
    void methodA();
}

public interface IfaceB {
    void methodB();
}

public class Example implements IfaceA, IfaceB {
    @Override
    public void methodA() {
        ...
    }

    @Override
    public void methodB() {
        ...
    }
}

Example var0 = new Example();
IfaceA var1 = var0;
IfaceB var2 = var0;

But, before Java 8, interfaces could not contain any code, only constants and method declarations, so classes could not inherit functionality from them, as they could by extending a base class.

Thus while interfaces provided the polymorphic part of multiple inheritance, they lacked the functionality reuse part.

In Java 8, all that changed with the addition of default and static methods to interfaces.

Now, an interface could contain code, and any class implementing it would inherit this functionality.
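A minimal sketch of what this enables (Walker, Swimmer and Duck are made-up names, used only for illustration):

public interface Walker {
    default void walk() { System.out.println("walking"); }
}

public interface Swimmer {
    default void swim() { System.out.println("swimming"); }
}

// Duck picks up working code from two sources without extending any class.
public class Duck implements Walker, Swimmer {
}

Duck duck = new Duck();
duck.walk();
duck.swim();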

It appears that Java 9 is about to take this one step further: it will add private methods to interfaces!

Before this, everything in an interface had to be public.

This essentially erases any differences between interfaces and abstract classes, and allows multiple inheritance. But, being a back door feature, it still has some limitations compared to true multiple inheritance that is available in languages like Python and C++:

  • You cannot inherit from any arbitrary collection of classes together. The class writer must allow joint inheritance by implementing the class as an interface.
  • Unlike regular base classes, interfaces can not be instantiated on their own, even if all the methods of an interface have default implementations.
    This can be easily worked around by creating a dummy class without any code that implements the interface.
  • There are no protected methods.
    Maybe Java 10 will add them…

But basically, after 8 major iterations of the language, you can finally have full blown multiple inheritance in Java.

Conclusion

These features have their official excuses:
Extension methods are supposed to be “syntactic sugar” for “helper” and utility classes.
Default method implementations are supposed to allow extending interfaces without breaking legacy code.

But whatever the original intentions and reasoning were, the fact remains: you can have C# code that calls what look like instance methods that were never part of the original class, and you can now have Java classes that inherit an “is a” relationship and working code from multiple sources.

And I don’t think this is a bad thing.
As long as programmers use these tools correctly, it will make code better.
Fighting your users is always a bad idea, more so if your users are developers themselves.

Do you know of any other features like this that showed up in other languages?
Let me know in the comments or by email!


Putting it out there…

03/04/2017

When I first discovered Linux and the world of Free Software, I was already programming in the Microsoft ecosystem for several years, both as a hobby, and for a living.

I thought switching to writing Linux programs was just a matter of learning a new API.

I was wrong! Very wrong!

I couldn’t find my way through the myriad of tools, build systems, IDEs and editors, frameworks and libraries.

 

I needed to change my thinking, and get things in order. Luckily, I found an excellent book that helped me do just that: “Beginning Linux Programming” from Wrox.

Chapter by chapter this book guided me from using the terminal, all the way up to building desktop GUI programs like I was used to on Windows.

 

But you can’t learn programming just by reading a book. You have to use what you learn immediately and build something, or you will not retain any knowledge!

And that is exactly what I did: as soon as I got far enough to begin compiling C programs, I started building “Terminal Bomber”.

In hindsight, the name sounds like malware. But it is not. It’s just a game: a clone of “Minesweeper” for the terminal, using the ncurses library.

 

I got as far as making it playable, adding high scores and help screens. I even managed to use Autotools for the build system.

And then I lost interest and let it rot. With the TODO file still full of items.

I never even got to packaging it for distribution, even though I planned from the start to release it under GPL v3.

 

But now, in preparation for a series of posts about quickly and easily setting up an HTTP server for IPC, I finally opened a GitHub account.

And to fill it with something, I decided to put up a repository for this forgotten project.

 

So here we go: https://github.com/lvmtime/term_bomber

Terminal Bomber finally sees the light of day!

“Release early, release often” is a well-known FOSS mantra. This release is kind of late for me personally, but early in the app’s life cycle.

Will anything become of this code? I don’t know. It would be great if someone at least runs it.

Sadly, this is currently the only project I published that is not built for a dead OS.

 

Just putting it out there… 😛

Security risks or panic mongering?

25/02/2017

When you read about IT related security threats and breaches in mainstream media, it usually looks like this:

[image: cia]

Tech sites and dedicated forums usually do a better job.

Last week, a new article by experts from Kaspersky Lab was doing the rounds on tech sites and forums.

In their blog, the researchers detail how they analyzed 7 popular “connected car” apps for Android phones that allow opening the car doors, and some even allow starting the engine. They found 5 types of security flaws in all of them.

Since I am part of a team working on a similar app, a couple of days later this article showed up in my work email, straight from our IT security chief.

This made me think – how bad are these flaws, really?

Unlike most of the stuff the good folks at Kaspersky find and publish, this time it’s not actual exploits but only potential weaknesses that could lead to the discovery of exploits, and personally, I don’t think some of them are even weaknesses.

So, here is the list of problems, followed by my personal analysis:

  1. No protection against application reverse engineering
  2. No code integrity check
  3. No rooting detection techniques
  4. Lack of protection against overlaying techniques
  5. Storage of logins and passwords in plain text

I am not a security expert, like these guys, just a regular software developer, but I’d like to think I know a thing or two about what makes apps secure.

Lets start from the bottom:

Number 5 is a real problem, and the biggest one on the list. Storing passwords as plain text is about the dumbest and most dangerous thing you can do to compromise the security of your entire service, and doing so on a platform that gives you dedicated secure storage for credentials, with no hassle whatsoever for your users, is just inexcusable!

It is true that on Android, application data gets some protection via file permissions by default, but this protection is not good enough for sensitive data like passwords.

However, not all of the apps on the list do this. Only two of the 7 store passwords unencrypted, and 4 others store the login (presumably the username) unencrypted.

Storing only the user name unprotected is not necessarily a security risk. Your email address is the username for your email account, but you give that out to everyone and sometimes publish it in the open.

Same goes for logins for many online services and games that are used as your public screen-name.

Next is number 4: overlay protection.

This one is interesting: as the Kaspersky researchers explain in their article, Android has an API that allows one app to display windows of arbitrary size, with varying degrees of transparency, over other apps.

This ability requires a separate permission, but users often ignore permissions.

This API has legitimate uses for accessibility and convenience; I even used it myself in several apps to give my users quick access from anywhere to some tasks they needed.
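For context, using that API looks roughly like this. This is only a minimal sketch: it assumes a context and an overlayView already exist, and that the SYSTEM_ALERT_WINDOW permission is declared in the manifest.

WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);

WindowManager.LayoutParams params = new WindowManager.LayoutParams(
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_SYSTEM_ALERT,  // the "draw over other apps" window type
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE, // the overlay does not steal key input
        PixelFormat.TRANSLUCENT);

// The view is now drawn on top of whatever app is currently in the foreground.
wm.addView(overlayView, params);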

Monitoring which app is in the foreground is also possible, but you would need to convince the user to set you up as an accessibility service, and that is not a simple task and cannot be automated without gaining root access.

So here is the rub: there is potential for stealing user credentials with this method, but pulling it off in a seamless way that most users would not notice is very difficult. And it requires a lot of cooperation from the user: first they must install your malicious app, then they must go into settings, ignore some severe warnings, and set it up a certain way.

I am not a malware writer either, so maybe I am missing something, but it looks to me like there are other, much more convenient exploits out there, and I have yet to see this technique show up in the real world.

So if I had to guess, I’d say it is not a very big concern. Actually, if you got your app set up as an accessibility service, you could steal all text from the device without the overlay trick, and I can’t think of a way to properly detect when a certain app is in use without this and without root.

Now we finally get to the items on the list that aren’t really problems:

Number 3: root detection. A rooted device is not necessarily a compromised device. On the contrary – the only types of root you can possibly detect are the ones the user installed of his own free will, and that means a tech-savvy user who knows how to protect his device from malware.

The whole cat and mouse game around root access to phones does more harm to security than letting users have official root access from the manufacturer, but this is a topic for a separate post.

If some app uses a root exploit behind its user’s back, that root will only be available to that app, and it will be almost impossible to detect from another app, especially one that is not supposed to be a dedicated anti-malware tool.

Therefore, I see no reason to count this as a security flaw.

Number 2: code integrity check. This is just overkill for each app to roll out on its own.

Android already has mandatory cryptographic signing in place for all apps, which validates the integrity of every file in the APK. In the latest versions of Android, v2 of the signing scheme was added, which also validates the entire archive as a whole (if you didn’t know, an APK is actually just a zip file).

So what is the point of an app trying to check its code from inside its code?

Since Android already has app isolation and signing at the system level, any malware that gets around this, and whose maker has reverse engineered enough of the targeted app’s code to modify its binary in useful ways, should have no trouble bypassing any internal code integrity check.

The effort required from the app developer trying to protect his app, versus the small amount of effort it would take to break this protection, just isn’t worth it.

Plus, a bad implementation of such an integrity check could do more harm than good, by introducing bugs and hampering users of legitimate copies of the app, leading to an overall bad user experience.

And finally, the big “winner”, or is it loser?

Number 1 on the list: protection from reverse engineering.

Any decent security expert will tell you that “security by obscurity” does not work!

If all it takes to break your app is to know how it works, consider it broken from the start. The most secure operating systems in the world are based on open source components, and the algorithms for the most secure encryptions are public knowledge.

Reverse engineering apps is also how security experts find the vulnerabilities so the app makers can fix them. It is how the information for the article I am discussing here was gathered!

Attempting to obfuscate the code only leads to difficult debugging, and increased chance of flaws and security holes in the app.

It can be considered an anti-pattern, which is why I am surprised it is featured at the top of a list of security flaws by someone like Kaspersky’s experts.

Lack of reverse engineering protection is the opposite of a security flaw – it is a good thing that can help find real problems!

So there you have it. Two real security issues (maybe even one and a half) out of five, and two out of seven apps actually vulnerable to the biggest one.

So what do you think? Are the connected cars really in trouble, or are the issues found by the experts minor, and the article should have actually been a lot shorter?

Also, one small funny fact: even though the writers tried to hide which apps they tested, it is pretty clear from the blurred icons in the article that one of the apps is from Kia and another one has the Volvo logo.

Since what the researchers found were not actual vulnerabilities that can be exploited right away, but rather bad practices, it would have been more useful to publish the identities of the problematic apps so that users could decide if they want to take the risk.

Just putting it out there that “7 leading apps for connected cars are not secure” is likely to cause unnecessary panic among those not tech savvy enough to read through and thoroughly understand the real implications of this discovery.

Beware Java’s half baked generics

13/10/2016

Usually I don’t badmouth Java. I think it’s a very good programming language.

In fact, I tend to defend it in arguments on various forums.

Sure, it lacks features compared to some other languages, but then again, throwing everything including the kitchen sink into a language is not necessarily a good idea. Just look at how easy it is to end up with a horrible mess of code in C++, with a single operator doing different things depending on context. Is &some_var trying to get the address of a variable, or a reference? And what does &&some_var do? It has nothing to do with the boolean AND operator!

So here we have a simple language friendly to new developers, which is good because there are lots of those using it on the popular Android platform.

Unfortunately, even the best languages have some implementation detail that will make you want to lynch their creators or just rip out your hair, depending on whether you externalize your violent tendencies or not.

Here is a short code example that demonstrates a bug that for about 5 minutes made me think I was high on something:

HashMap<Integer, String> map = new HashMap<>();

byte a = 42;
int b = a;

map.put(b, "The answer!");

if (map.containsKey(a))
	System.out.println("The answer is: " + map.get(a));
else
	System.out.println("What was the question?");

What do you expect this code to print?

Will it even compile?

Apparently it will, but the result will surprise anyone who is not well familiar with Java’s generic types.

Yes folks – the key will not be found and the message What was the question? will be printed.

Here is why:

The generic types in Java are not fully parameterized. Unlike a proper C++ template, some methods of generic containers take parameters of type Object, instead of the type the container instantiation was defined with.

For HashMap, even though its put method is properly parameterized and will raise a compiler error if the wrong key type is used, the get and containsKey methods take a parameter of type Object and will not even throw a runtime exception if the wrong type is provided. They will simply return null or false respectively, as if the key was simply not there.
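For reference, these are the relevant signatures as declared in java.util.HashMap (and in the Map interface):

public V put(K key, V value)            // key is checked against the type parameter K
public V get(Object key)                // accepts any Object, no compile-time check
public boolean containsKey(Object key)  // same here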

The other part of the problem is that primitive types such as byte and int are second-class citizens in Java. They are not objects like everything else and cannot be used to parameterize generics.

They do have object equivalents named Byte and Integer but those don’t have proper operator overloading so are not convenient for all use cases.

Thus, in the code sample above, the variable a gets autoboxed to Byte, which as far as Java is concerned is a completely different type that has nothing to do with Integer, and therefore there is no way to find Byte keys in an Integer-keyed map.

A language that implements proper generics would have parameterized these methods so either a compilation error occurred or an implicit cast was made.

In Java, it is up to you as a programmer to keep your key types straight, even between seemingly compatible types like integers of various sizes.
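In the sample above, for instance, the lookup works as expected once the key is explicitly brought back to the map’s key type:

if (map.containsKey((int) a))   // widen byte to int, so it autoboxes to Integer
	System.out.println("The answer is: " + map.get((int) a));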

In my case, I was working with a binary protocol received from an external device, and the function filling up the map was not the same one reading from it, so it was not straightforward to align the types everywhere. But in the end I did it and learned my lesson.

Maybe this long rant will help you too. At least until a version of Java gets this part right…

Are Google coders bored?

30/08/2013

I was browsing the Android source code to try and understand some things about the ActionBar layout, when I ran into another little pearl showcasing Android programmers’ sense of humor, or is it their level of boredom?

You decide…

Looking at an older version of ActionBarView.java, I found a member variable called mUpGoerFive (look at line 104 in the link provided).

It held a ViewGroup, so it was important for the display part, but the name did not make sense at first.

Until I remembered this little beauty: http://xkcd.com/1133/

What’s even funnier: while I was looking for a way to link to the proper version of the source file (this variable is removed in the latest version), I ran into the following commit message:
Invasion of the monkeys

I know these are not the first easter eggs of this kind found in code released by Google, and maybe I am not the first to find them (if you’ve seen this elsewhere, please leave a comment), but they did provide some entertainment during an otherwise tedious task, so I figured I’d mention them.

Android, Busybox and the GNU project

12/11/2012

Richard Stallman, the father of the Free Software movement and the GNU project, always insists that people refer to some Linux based operating systems as “GNU/Linux”. This point is so important to him, he will refuse to grant an interview to anyone not willing to use the correct term.

There are people who don’t like this attitude. Some have even tried to “scientifically prove” that GNU project code comprises such a small part of a modern Linux distribution that it does not deserve to be mentioned in the name of such distributions.

 

Personally, I used to think that the GNU project deserved recognition for its crucial historical role in building freedom-respecting operating systems, even if it was only a small part of a modern system.

But a recent experience proved to me that it is not about the amount of code lines or number of packages. And it is not a historical issue. There really is a huge distinction between Linux and GNU/Linux, but to notice it you have to work with a different kind of Linux. One that is not only stripped of GNU components, but of its approach to system design and user interface.

Say hello to Android. Or should I say Android/Linux…

 

Many people forget, it seems, that Linux is just a kernel. And as such, it is invisible to all users, advanced and novice alike. To interact with it, you need an interface, be it a text based shell or a graphical desktop.

So what happens when someone slaps a completely different user-space with a completely different set of interfaces on top of the Linux kernel?

 

Here is the story that prompted me to write this half rant half tip post:

My boss wanted to back up the personal data on his Android phone. This sounds like it should be simple enough to do, but the reality is quite the opposite.

In the Android security model, every application is isolated by having its own user (they are created sequentially and have names like app_123).

An application is given its own folder in the device’s data partition where it is supposed to store its data, such as configuration, user progress (for games) etc.

No application can access the folder of another application and read its data.

 

This makes sense from the security perspective, except for one major flaw: no 3rd party backup utility can ever be made. And there is no backup utility provided as part of the system.

Some device makers provide their own backup utilities, and starting with Android 4.0 there is a way to perform a backup through ADB (which is part of Android SDK), but this method is not designed for the average user and has several issues.

 

There is one way an application on the device can create a proper backup: by gaining root privileges.

But Android is so “secure” it has no mechanism to allow the user to grant such privileges to an application, no matter how much he wants or needs to.

The solution of course, is to change the OS to add the needed capability, but how?

Usually, the owner of a stock Android device would look for a tool that exploits a security flaw in the system to gain root privileges. Some devices can be officially unlocked so a modified version of Android can be installed on them with root access already open.

 

The phone my boss has is somewhat unusual: it has a version of the OS designed for development and testing, so it has root but the applications on it do not have root.

What this confusing statement means is that the ADB daemon is running with root privileges on the device, allowing you to get a root shell on the phone from the PC and even remount the system partition as writable.

But, there is still no way for an application running on the device to gain root privileges, so when my boss tried to use Titanium Backup, he got a message that his device is not “rooted” and therefore the application will not work.

 

Like other “root” applications for Android, Titanium Backup needs the su binary to function. But stock Android does not have a su binary. In fact, it does not even have the cp command. That’s right – you can get a shell interface on Android that might look a little bit like the “regular Linux” one, but if you want to copy a file you have to use cat.

This is something you will not see on a GNU/Linux OS, not even other Linux based OSs designed for phones such as Maemo or SHR.

 

Google wanted to avoid any GPL-covered code in the user-space (i.e. anywhere they could get away with it), so not only did they not use a “real” shell (such as Bash), they didn’t even use Busybox, which is the usual shell replacement in small and embedded systems. Instead, they created their own very limited (or, as I call it, neutered) version called “Toolbox”.

 

Fortunately, a lot of work has been done to remedy this, so it is not hard to find a Busybox binary ready-made to run on an Android-powered, ARM-based device.

The trick is installing it. Instructions vary slightly from site to site, but I believe the following will work in most cases:

adb remount
adb push busybox /system/bin
adb shell chmod 6755 /system/bin/busybox
adb shell busybox --install /system/bin

Note that your ADB must run as root on the device side!
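On development builds like this one, that can usually be arranged from the PC by asking the daemon to restart itself with root privileges:

adb root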

The important part to notice is the chmod line: you must set the setuid and setgid bits on the busybox binary if you want it to function properly as su.

And no – I didn’t write the permissions parameter to chmod as digits to make myself look like a “1337 hax0r”. Android’s version of chmod does not accept letter parameters for permissions.
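For reference, this is how that octal value breaks down, along with the symbolic form a full chmod would have accepted:

# 6755 = 4000 (setuid) + 2000 (setgid) + 0755 (rwxr-xr-x)
chmod u=rwxs,g=rxs,o=rx /system/bin/busybox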

 

After doing the steps above I had a working busybox and a proper command shell on the phone, but the backup application still could not get root. When I installed a virtual terminal application on the phone and tried to run su manually, I got the weirdest error: unknown user: root

How could this be? ls -l clearly showed files belonging to the ‘root’ user. As a GNU/Linux user, I was used to more descriptive and helpful error messages.

I tried running ‘whoami’ from the ADB root shell, and got a similarly cryptic message: unknown uid 0

Clearly there was a root user with the proper UID 0 on the system, but busybox could not recognize it.

 

Googling showed that I was not the only one encountering this problem, but no solution was in sight. Some advised to reinstall busybox, others suggested playing with permissions.

Finally, something clicked: on a normal GNU/Linux system there is a file called passwd in the /etc folder. This file lists all the users on the system, along with some information for each user, such as their home folder and login shell.

But Android does not use this file, and so it does not exist by default.

 

Yet another difference.

So I did the following:

adb shell
# echo 'root::0:0:root:/root:/system/sh' >/etc/passwd

This worked like a charm and finally solved the su problem for the backup application. My boss could finally backup and restore all his data on his own, directly on the phone and without any special trickery.

 

Some explanation of the “magic” line:

In the passwd file each line represents a single user, and has several ‘fields’ separated by colons (:). You can read in detail about it here.
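Broken into its fields, the line used above reads:

# name : password : UID : GID : comment : home  : shell
  root :          : 0   : 0   : root    : /root : /system/sh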

I copied the line for the root user from my PC, with some slight changes:

The second field is the password field. I left it blank so the su command will not prompt for password.

This is a horrible practice in terms of security, but on Android there is no other choice, since applications attempting to use the su command do not prompt for password.

There are applications called SuperUser and SuperSU that try to ask user permission before granting root privileges, but they require a special version of the su binary which I was unable to install.

 

The last field is the “login shell” which on Android is /system/sh

The su binary must be able to start a shell for the application to execute its commands.

Note, this is actually a symlink to the /system/mksh binary, and you may want to redirect it to busybox.

 

So this is my story of making one Android/Linux device a little more GNU/Linux device.

It took me a lot of time, trial and error, and of course googling to get this done, and it reminded me again that the saying “Linux is Linux” has its limits and that we should not take the GNU for granted.

It is an important part of the OS I use both at home and at work, not only in terms of components but also in terms of structure and behavior.

 

And it deserves to be part of the OS classification, if for no other reason than to distinguish the truly different kinds of Linux that are out there.

 

One of these things is not like the others!

30/08/2012

Please look at the following picture:

These are “smart” phones I own.

All of them have different hardware specs, but one is truly different from the others.

Image of four smartphones

Can you tell which one?

 

 

 

 

 

It’s the one on the right – i-mate Jamin.

It is also the first “smart” phone that I ever owned.

 

What makes it different from the others?

The fact that it is the only one in the bunch that does not run on Free Software.

 

I was inspired to take this picture and put it on my blog by another post (in Hebrew), that talks about black, round corner rectangles and the recent madness surrounding them.

But I am not going to write about that.

There are already plenty of voices shouting about it all over the Internet, and I have nothing constructive to add.

Instead, I will introduce you to my lovely phone collection, which contributed a lot to my hobby and professional programming.

 

And we will start with the historical sample on the right: i-mate Jamin. (specs)

 

Back in early 2006, when this device came out, “smartphone” was still a registered trademark of Microsoft, the name they chose for the version of their Windows CE based mobile OS for devices with no touchscreen. (The touchscreen version was then called Windows Mobile Phone Edition)

Such devices were for geeks and hard core businessmen who had to be glued to their office 24/7.

 

But despite having a proprietary OS, this was a very open device: you could run any program on it (we didn’t call them “apps” then), and you could develop for it without the need to register or pay.

It didn’t matter what country you were from, or how old you were. The complete set of tools was available as a free download from Microsoft’s site.

And the OS allowed you to do a lot of things to it: like its desktop cousin, it completely lacked security, you could even overwrite, or more precisely “overshadow” OS files that were in ROM with a copy with the same name stored in user accessible NAND flash (or RAM on older devices).

 

The system API was almost identical to the Win32 API, which was (and still is) very common on the desktop, so if you knew how to write a program for your Windows powered PC, you knew how to write a program for your phone.

Unlike the systems we are used to today, Windows Mobile had no built in store.

You were on your own when it came to distributing your software, though there were several sites that acted much like the application stores do today: they sold your program for a commission.

But that too meant freedom: no commercial company was dictating morals to the developers or telling them that their program had no right to exist because it “confused users” or simply competed with that company’s own product.

So even though the OS brought with it most of the diseases common to desktop versions of Windows, it gave developers a free range, and thus had a thriving software ecosystem, until MS killed it off in a futile attempt to compete with Apple’s iOS and Google’s Android by taking the worst aspects of both.

 

The second phone from the right is the Neo 1973.

It was so named because 1973 was the year the first cellular call was made.

 

I got this device in 2008. By that time, I learned a lot about software freedom, so when I heard about a completely free (as in freedom of speech) phone, I just had to have it.

It wasn’t easy: it could only be bought directly from the company, which meant international shipping and a lot of bureaucracy with the ministry of communication, which required special approval for every imported cellphone.

I was particularly concerned because this was not a commercially available model, despite having FCC certification, so it was possible that I could not get it through customs as a private citizen.

In the end, the problem was solved, though not before customs fees and added UPS charges almost doubled the cost of the device.

 

It felt great to have it. I never had such complete freedom with a phone before.

But then I realized – I had no idea what to do with this freedom! Developing for the OpenMoko Linux distribution (later, the project switched to SHR) was very different from developing for Windows.

I had a lot to learn, and in the end, I wound up making only one usable program for my two Neo phones: the screen rotate.

 

One of the things that amazed me about the OpenMoko project was that even though the software and hardware were experimental and in an early stage of development, in many ways they were much better than the commercial Windows Mobile that had been sold for years to many phone makers.

For example, OpenMoko had perfect BiDi support needed for Hebrew and Arabic languages, as well as fonts for those languages shipped with the OS.

This is something MS never did for Windows Mobile, despite having a large R&D center in Israel for almost two decades, and having a large market in other countries that write right-to-left languages.

 

Also, the Internet browser, though slow, was much more advanced than the one on WM, and even came close to passing the Acid2 test.

 

The only trouble was, I could never get the microphone working. It didn’t really matter, since I wanted the phone for development and testing, and didn’t intend to carry it around with me for daily use.

 

Which brings us to the next phone in the collection: the Neo Freerunner.

This was the second device from the OpenMoko project, the more powerful successor to the Neo 1973.

 

At first, I swore I would not buy it. There just wasn’t enough difference between it and the original. Sure, it had WiFi and a faster processor, but is that really a reason to buy another phone?

But by that time, my trusty old Jamin was getting really old; it developed some hardware problems, and even with a new battery it would not charge well.

 

I had a lot of choice in smartphones, working for a company that developed software for them, yet I could not bear the thought of buying yet another non-free phone.

So in the end I broke down and bought the Freerunner, mostly for that nice feeling of carrying a tiny computer in my pocket, made completely with Free Software and Open Hardware.

Thanks to Doron Ofek who put a lot of effort in to advancing the OpenMoko project (and other Free Software projects) in Israel, getting the second device was much easier.

 

And so it became my primary and only cellphone for the next three years.

I don’t think there are too many people in the world who can honestly say they used OpenMoko phone as their primary cellphone, with no backup, but I was one of them.

Flashing a brand new OS twice a month or more (if I had time) was just part of the fun.

 

Sadly, all good things come to an end. The life expectancy of a smartphone is 18 months at best. I was seeing powerful Android-based devices all around me, with large screens, fast processors, and, most importantly – 3G data (I spend a lot of time out of WiFi range).

And I wanted a stable device. As much as I hated to admit it I needed a break from living with a prototype phone and a rapidly changing OS.

But I wasn’t ready to lose my freedom. And I didn’t want to completely surrender my privacy.

Most Android devices need to be hacked just to get root on your own system. And even though the OS is Free Software, most of the “apps”, including built-in ones, are proprietary.

And of course, Google is trying to milk every last bit of your personal information it can, and trying to keep them from doing it on Android is very uncomfortable, though definitely possible.

 

This just won’t do.

 

Finally, I found a perfect compromise:

My current phone – Nokia N900 (spec).

 

It was far from being a new device when I finally ordered one through eBay.

It had a major downside compared to any Android device – it had a resistive instead of capacitive touchscreen.

 

Yet it was the perfect merger, borrowing from all worlds:

It runs mostly on free software, with a real GNU/Linux distribution under the hood, unlike Android which uses a modified Linux kernel, but has little in common with what most people call “Linux”.

It has a proper package manager, offering a decent selection of free software, and updates for all system components including special kernels, but also connected to Nokia’s OVI store.

It even came with a terminal emulator already installed.

 

Unlike the OpenMoko project, this was a finished and polished device, with a stable, simple, useful and convenient interface, widgets, and all applications working satisfactorily out of the box.

It even has the Flash plugin, which, though a horrible piece of proprietary software on whose grave I will gladly dance, is still needed sometimes to access some sites.

 

So here I am now, with an outdated, but perfectly usable phone, that can do just about anything from connecting USB peripherals to mounting NFS shares.

It is perfect for me, despite its slightly bulky size and relatively small 3.5 inch screen.

 

But I know that no phone lasts forever. Some day, the N900 will have to be retired, yet I see no successor on the horizon.

With Microsoft and Apple competing in “who can take away most user rights and get away with it”, and Android devices still containing plenty of locks, restrictions and privacy issues, I don’t know what I will buy when the time comes.

Who knows, maybe with luck and a lot of effort by some very smart people, the GTA04 will blossom in to something usable on a daily basis.

Or maybe Intel will get off their collective behinds and put out a phone with whatever Meego/Maemo/Moblin has morphed in to.

Even Mozilla is pushing out a Mobile OS of sorts, so who knows…

 

What do you think?