Archive

Archive for the ‘Rants’ Category

Sneaking features through the back door

15/04/2017 Leave a comment

Sometimes programming language developers decide that certain practices are bad, so bad that they try to prevent their use through the language they develop.

For example: in both Java and C#, multiple inheritance is not allowed. The language standard prohibits it, so trying to specify more than one base class will result in a compiler error.

Another blocking “feature” these languages share is syntax for preventing the creation of derived classes altogether.

For Java, it is declaring a class to be final, which might be a bit confusing for new users, since the same keyword is used to declare constants.

As an example, this will not compile:

public final class YouCanNotExtendMe {
    ...
}

public class TryingAnyway extends YouCanNotExtendMe {
    ...
}

For C# just replace final with sealed.

This can also be applied to specific methods instead of the entire class, to prevent overriding, in both languages.

While application developers may not find many uses for this feature, it shows up even in the Java standard library. Just try extending the built-in String class.

But, language features are tools in a tool box.

Each one can be both useful and misused or abused. It depends solely on the developer using the tools.

And that is why as languages evolve over the years, on some occasions their developers give up fighting the users and add some things they left out at the beginning.

Usually in a sneaky, roundabout way, to avoid admitting they were wrong or that they succumbed to peer pressure.

In this post, I will show two examples of such features, one from Java, and one from C#.

C# Extension methods

In version 3.0 of C# a new feature was added to the language: “extension methods”.

Just as their name suggests, they can be used to extend any class, including a sealed class. And you do not need access to the class implementation to use them. Just write your own static class with a static method (or as many methods as you want) whose first parameter is marked with the keyword this and is of the type you want to extend.

Microsoft’s own guide gives an example of adding a method to the built in sealed String type.
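Here is a minimal sketch along the same lines as Microsoft’s WordCount example (the class and method names here are my own illustration):

```csharp
using System;

// Extension methods must live in a non-generic static class.
public static class StringExtensions
{
    // The 'this' modifier on the first parameter makes WordCount callable
    // as if it were an instance method of the sealed String type.
    public static int WordCount(this string str)
    {
        return str.Split(new[] { ' ', '.', '?' },
                         StringSplitOptions.RemoveEmptyEntries).Length;
    }
}

public class Demo
{
    public static void Main()
    {
        // Looks like an instance call, but compiles to a plain static call.
        Console.WriteLine("Hello extension methods".WordCount()); // prints 3
    }
}
```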

Those who know and use C# will probably argue that there are two key differences between extension methods and derived classes:

  1. Extension methods do not create a new type.
    Personally, I think that only affects compile-time checks, which can be replaced with run-time checks if not all instances of the ‘base’ class can be extended.
    Also, a creative workaround may be possible with attributes.
  2. Existing methods can not be overridden by extension methods.
    This is a major drawback, and I can not think of a workaround for it.
    But, you can still overload methods. And who knows what will be added in the future…

So it may not be complete, but a way to break class seals was added to the language after only two major iterations.

Multiple inheritance in Java through interfaces

Java has two separate mechanisms to support polymorphism: inheritance and interfaces.

A Java class can have only one base class it inherits from, but can implement many interfaces, and so can be referenced through these interface types.

public interface IfaceA {
    void methodA();
}

public interface IfaceB {
    void methodB();
}

public class Example implements IfaceA, IfaceB {
    @Override
    public void methodA() {
        ...
    }

    @Override
    public void methodB() {
        ...
    }
}

Example var0 = new Example();
IfaceA var1 = var0;
IfaceB var2 = var0;

But, before Java 8, interfaces could not contain any code, only constants and method declarations, so classes could not inherit functionality from them, as they could by extending a base class.

Thus while interfaces provided the polymorphic part of multiple inheritance, they lacked the functionality reuse part.

In Java 8 all that changed with the addition of default and static methods to interfaces.

Now, an interface could contain code, and any class implementing it would inherit this functionality.
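As a small sketch of what this enables (the type names are my own invention), a class can now pick up working method bodies from two interfaces at once:

```java
interface Walks {
    default String walk() { return "walking"; }
}

interface Swims {
    default String swim() { return "swimming"; }
}

// Duck inherits working code from both interfaces without writing a line.
class Duck implements Walks, Swims { }

public class DefaultMethodDemo {
    public static void main(String[] args) {
        Duck d = new Duck();
        System.out.println(d.walk() + " and " + d.swim()); // walking and swimming
    }
}
```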

It appears that Java 9 is about to take this one step further: it will add private methods to interfaces!

Before this, everything in an interface had to be public.

This essentially erases any differences between interfaces and abstract classes, and allows multiple inheritance. But, being a back door feature, it still has some limitations compared to true multiple inheritance that is available in languages like Python and C++:

  • You can not combine any arbitrary collection of classes together. The class author must allow joint inheritance by implementing the class as an interface.
  • Unlike regular base classes, interfaces can not be instantiated on their own, even if all the methods of an interface have default implementations.
    This can be easily worked around by creating a dummy class, with no code of its own, that implements the interface.
  • There are no protected methods.
    Maybe Java 10 will add them…
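The instantiation workaround from the list above does not even need a named dummy class – an empty anonymous class will do (the interface name here is illustrative):

```java
interface Greeter {
    default String greet() { return "hello from an interface"; }
}

public class GreeterDemo {
    public static void main(String[] args) {
        // The empty braces create an anonymous class with no code of its own.
        Greeter g = new Greeter() { };
        System.out.println(g.greet()); // hello from an interface
    }
}
```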

But basically, after 8 major iterations of the language, you can finally have full blown multiple inheritance in Java.

Conclusion

These features have their official excuses:
Extension methods are supposed to be “syntactic sugar” for “helper” and utility classes.
Default method implementations are supposed to allow extending interfaces without breaking legacy code.

But whatever the original intentions and reasoning were, the fact remains: you can have C# code that calls what look like instance methods that are not part of the original object, and you can now have Java classes that inherit an “is a” type and working code from multiple sources.

And I don’t think this is a bad thing.
As long as programmers use these tools correctly, it will make code better.
Fighting your users is always a bad idea, more so if your users are developers themselves.

Do you know of any other features like this that showed up in other languages?
Let me know in the comments or by email!


Security risks or panic mongering?

25/02/2017 1 comment

When you read about IT related security threats and breaches in mainstream media, it usually looks like this:

[image: sensationalist “CIA” headline]

Tech sites and dedicated forums usually do a better job.

Last week, a new article by experts from Kaspersky Lab was doing the rounds on tech sites and forums.

In their blog, the researchers detail how they analyzed 7 popular “connected car” apps for Android phones that allow opening car doors, and some even allow starting the engine. They found 5 types of security flaws in all of them.

Since I am part of a team working on a similar app, a couple of days later this article showed up in my work email, straight from our IT security chief.

This made me think – how bad are these flaws, really?

Unlike most stuff the good folks at Kaspersky find and publish, this time it’s not actual exploits but only potential weaknesses that could lead to discovery of exploits, and personally, I don’t think that some are even weaknesses.

So, here is the list of problems, followed by my personal analysis:

  1. No protection against application reverse engineering
  2. No code integrity check
  3. No rooting detection techniques
  4. Lack of protection against overlaying techniques
  5. Storage of logins and passwords in plain text

I am not a security expert, like these guys, just a regular software developer, but I’d like to think I know a thing or two about what makes apps secure.

Let’s start from the bottom:

Number 5 is a real problem, and the biggest one on the list. Storing passwords as plain text is about the dumbest and most dangerous thing you can do to compromise the security of your entire service, and doing so on a platform that gives you dedicated secure storage for credentials, with no hassle whatsoever for your users, is just inexcusable!

It is true that on Android, application data gets some protection via file permissions by default, but this protection is not good enough for sensitive data like passwords.

However, not all of the apps on the list do this. Only two of the 7 store passwords unencrypted, and 4 others store the login (presumably the username) unencrypted.

Storing only the user name unprotected is not necessarily a security risk. Your email address is the username for your email account, but you give that out to everyone and sometimes publish it in the open.

Same goes for logins for many online services and games that are used as your public screen-name.

Next is number 4: overlay protection.

This one is interesting: as the Kaspersky researchers explain in their article, Android has an API that allows one app to display arbitrarily sized windows with varying degrees of transparency over other apps.

This ability requires a separate permission, but users often ignore permissions.

This API has legitimate uses for accessibility and convenience; I even used it myself in several apps to give my users quick access from anywhere to some tasks they needed.

Monitoring which app is in the foreground is also possible, but you would need to convince the user to set you up as an accessibility service, and that is not a simple task and can not be automated without gaining root access.

So here is the rub: there is a potential for stealing user credentials with this method, but pulling it off in a seamless way most users would not notice is very difficult. And it requires a lot of cooperation from the user: first they must install your malicious app, then they must go into settings, ignore some severe warnings, and set it up a certain way.

I am not a malware writer either, so maybe I am missing something, but it looks to me like there are other, much more convenient exploits out there, and I have yet to see this technique show up in the real world.

So if I had to guess – I’d say it is not a very big concern. Actually, if you got your app set up as an accessibility service, you could steal all text from the device without the overlay trick, and I can’t think of a way to properly detect when a certain app is in use without this and without root.

Now we finally get to the items on the list that aren’t really problems:

Number 3: root detection. A rooted device is not necessarily a compromised device. On the contrary – the only types of root you can possibly detect are the ones the user installed of his own free will, and that means a tech savvy user who knows how to protect his device from malware.

The whole cat and mouse game around root access to phones does more harm to security than letting users have official root access from the manufacturer, but this is a topic for a separate post.

If some app uses a root exploit behind its user’s back, it will only be available to that app, and almost impossible to detect from another app, especially one that is not supposed to be a dedicated anti-malware tool.

Therefore, I see no reason to count this as a security flaw.

Number 2: Code integrity check. This is just overkill for each app to roll out on its own.

Android already has mandatory cryptographic signing in place for all apps that validates the integrity of every file in the APK. In the latest versions of Android, v2 of the signing scheme was added, which also validates the entire archive as a whole (if you didn’t know this, an APK is actually just a zip file).

So what is the point of an app trying to check its code from inside its code?

Since Android already has app isolation and signing on a system level, any malware that gets around this, and whose maker has reversed enough of the targeted app code to modify its binary in useful ways, should have no trouble bypassing any internal code integrity check.

The amount of effort on the side of the app developer trying to protect his app, vs. the small amount of effort it would take to break this protection, just isn’t worth it.

Plus, a bad implementation of such an integrity check could do more harm than good, by introducing bugs and hampering users of legitimate copies of the app, leading to an overall bad user experience.

And finally, the big “winner”, or is it loser?

Number 1 on the list: protection from reverse engineering.

Any decent security expert will tell you that “security by obscurity” does not work!

If all it takes to break your app is to know how it works, consider it broken from the start. The most secure operating systems in the world are based on open source components, and the algorithms for the most secure encryptions are public knowledge.

Reverse engineering apps is also how security experts find the vulnerabilities so the app makers can fix them. It is how the information for the article I am discussing here was gathered!

Attempting to obfuscate the code only leads to difficult debugging and an increased chance of flaws and security holes in the app.

It can be considered an anti-pattern, which is why I am surprised it is featured at the top of a list of security flaws by someone like Kaspersky’s experts.

Lack of reverse engineering protection is the opposite of a security flaw – it is a good thing that can help find real problems!

So there you have it. Two real security issues (maybe even one and a half) out of five, and two out of seven apps actually vulnerable to the biggest one.

So what do you think? Are the connected cars really in trouble, or are the issues found by the experts minor, and the article should have actually been a lot shorter?

Also, one small funny fact: even though the writers tried to hide which apps they tested, it is pretty clear from the blurred icons in the article that one of the apps is from Kia and another one has the Volvo logo.

Since what the researchers found were not actual vulnerabilities that can be exploited right away, but rather bad practices, it would be more useful to publish the identity of the problematic apps so that users could decide if they want to take the risk.

Just putting it out there that “7 leading apps for connected cars are not secure” is likely to cause unnecessary panic among those not tech savvy enough to read through and thoroughly understand the real implications of this discovery.

Android, Busybox and the GNU project

12/11/2012 2 comments

Richard Stallman, the father of the Free Software movement and the GNU project, always insists that people refer to some Linux based operating systems as “GNU/Linux”. This point is so important to him, he will refuse to grant an interview to anyone not willing to use the correct term.

There are people who don’t like this attitude. Some have even tried to “scientifically prove” that GNU project code comprises such a small part of a modern Linux distribution that it does not deserve to be mentioned in the name of such distributions.

 

Personally, I used to think that the GNU project deserved recognition for its crucial historical role in building freedom respecting operating systems, even if it was only a small part of a modern system.

But a recent experience proved to me that it is not about the amount of code lines or number of packages. And it is not a historical issue. There really is a huge distinction between Linux and GNU/Linux, but to notice it you have to work with a different kind of Linux. One that is not only stripped of GNU components, but of its approach to system design and user interface.

Say hello to Android. Or should I say Android/Linux…

 

Many people forget, it seems, that Linux is just a kernel. And as such, it is invisible to all users, advanced and novice alike. To interact with it, you need an interface, be it a text based shell or a graphical desktop.

So what happens when someone slaps a completely different user-space with a completely different set of interfaces on top of the Linux kernel?

 

Here is the story that prompted me to write this half rant half tip post:

My boss wanted to backup his personal data on his Android phone. This sounds like it should be simple enough to do, but the reality is quite the opposite.

In the Android security model, every application is isolated by having its own user (they are created sequentially and have names like app_123).

An application is given its own folder in the device’s data partition where it is supposed to store its data, such as configuration, user progress (for games) etc.

No application can access the folder of another application and read its data.

 

This makes sense from the security perspective, except for one major flaw: no 3rd party backup utility can ever be made. And there is no backup utility provided as part of the system.

Some device makers provide their own backup utilities, and starting with Android 4.0 there is a way to perform a backup through ADB (which is part of Android SDK), but this method is not designed for the average user and has several issues.

 

There is one way an application on the device can create a proper backup: by gaining root privileges.

But Android is so “secure” it has no mechanism to allow the user to grant such privileges to an application, no matter how much he wants or needs to.

The solution of course, is to change the OS to add the needed capability, but how?

Usually, the owner of a stock Android device would look for a tool that exploits a security flaw in the system to gain root privileges. Some devices can be officially unlocked so a modified version of Android can be installed on them with root access already open.

 

The phone my boss has is somewhat unusual: it has a version of the OS designed for development and testing, so it has root but the applications on it do not have root.

What this confusing statement means is, that the ADB daemon is running with root privileges on the device allowing you to get a root shell on the phone from the PC and even remount the system partition as writable.

But, there is still no way for an application running on the device to gain root privileges, so when my boss tried to use Titanium Backup, he got a message that his device is not “rooted” and therefore the application will not work.

 

Like other “root” applications for Android, Titanium Backup needs the su binary to function. But stock Android does not have a su binary. In fact, it does not even have the cp command. That’s right – you can get a shell interface on Android that might look a little bit like the “regular Linux”, but if you want to copy a file you have to use cat.
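So on a stock device, a plain file copy looks like this (the file names here are placeholders):

```shell
# No cp in Toolbox - use cat with a shell redirect instead.
echo 'important data' > original.txt
cat original.txt > copy.txt
```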

This is something you will not see on a GNU/Linux OS, not even other Linux based OSs designed for phones such as Maemo or SHR.

 

Google wanted to avoid any GPL covered code in the user-space (i.e. anywhere they could get away with it), so not only did they not use a “real” shell (such as BASH) they didn’t even use Busybox which is the usual shell replacement in small and embedded systems. Instead, they created their own very limited (or as I call it neutered) version called “Toolbox”.

 

Fortunately, a lot of work has been done to remedy this, so it is not hard to find a Busybox binary ready made to run on Android powered ARM based device.

The trick is installing it. Instructions vary slightly from site to site, but I believe the following will work in most cases:

adb remount
adb push busybox /system/bin
adb shell chmod 6755 /system/bin/busybox
adb shell busybox --install /system/bin

Note that your ADB must run as root on the device side!

The important part to notice here is the third command: you must set the setuid and setgid bits on the busybox binary if you want it to function properly as su.

And no – I didn’t write the permissions parameter to chmod as digits to make myself look like a “1337 hax0r”. Android’s version of chmod does not accept letter parameters for permissions.
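For reference, here is how that octal mode breaks down, demonstrated on a scratch file rather than the real busybox binary:

```shell
# 6755 = 4000 (setuid) + 2000 (setgid) + 0755 (rwxr-xr-x)
touch busybox_demo
chmod 6755 busybox_demo
stat -c '%a' busybox_demo   # 6755
```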

 

After doing the steps above I had a working busybox and a proper command shell on the phone, but the backup application still could not get root. When I installed a virtual terminal application on the phone and tried to run su manually, I got the weirdest error: unknown user: root

How could this be? ls -l clearly showed files belonging to the ‘root’ user. As a GNU/Linux user, I was used to more descriptive and helpful error messages.

I tried running ‘whoami’ from the ADB root shell, and got a similarly cryptic message: unknown uid 0

Clearly there was a root user with the proper UID 0 on the system, but busybox could not recognize it.

 

Googling showed that I was not the only one encountering this problem, but no solution was in sight. Some advised to reinstall busybox, others suggested playing with permissions.

Finally, something clicked: on a normal GNU/Linux system there is a file called passwd in the /etc folder. This file lists all the users on the system and some information for each user, such as their home folder and login shell.

But Android does not use this file, and so it does not exist by default.

 

Yet another difference.

So I did the following:

adb shell
# echo 'root::0:0:root:/root:/system/sh' >/etc/passwd

This worked like a charm and finally solved the su problem for the backup application. My boss could finally backup and restore all his data on his own, directly on the phone and without any special trickery.

 

Some explanation of the “magic” line:

In the passwd file each line represents a single user, and has several ‘fields’ separated by colons (:). You can read in detail about it here.
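For quick reference, the seven colon-separated fields in that line are:

```
root        - user name
(empty)     - password; blank means no password will be required
0           - UID
0           - GID
root        - comment / full name
/root       - home folder
/system/sh  - login shell
```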

I copied the line for the root user from my PC, with some slight changes:

The second field is the password field. I left it blank so the su command will not prompt for password.

This is a horrible practice in terms of security, but on Android there is no other choice, since applications attempting to use the su command do not prompt for password.

There are applications called SuperUser and SuperSU that try to ask user permission before granting root privileges, but they require a special version of the su binary which I was unable to install.

 

The last field is the “login shell”, which on Android is /system/sh.

The su binary must be able to start a shell for the application to execute its commands.

Note, this is actually a symlink to the /system/mksh binary, and you may want to redirect it to busybox.

 

So this is my story of making one Android/Linux device a little more GNU/Linux device.

It took me a lot of time, trial and error, and of course googling to get this done, and reminded me again that the saying “Linux is Linux” has its limits and that we should not take the GNU for granted.

It is an important part of the OS I use both at home and at work, not only in terms of components but also in terms of structure and behavior.

 

And it deserves to be part of the OS classification, if for no other reason than to distinguish the truly different kinds of Linux that are out there.

 

eval is evil!

09/06/2012 Leave a comment

Last week a friend of mine got an email pretending to be from Linked-In.

It looked suspicious so she forwarded it to me for inspection.

 

A quick look at the HTML attachment showed that it contained some very fishy JavaScript.

One notable part of it was a large array of floating point numbers, positive and negative.

 

As some of you might have guessed, this array actually represented some more scrambled JavaScript.

 

Now, I am not a security expert, but I was curious what this thing did. I know there is some tool to test run JavaScript, but I did not remember what it was called, so I just ran Python in interactive mode to make a quick loop and unscramble the floating point array.
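The quick loop looked something like this – the numbers below are made up for illustration, and the real transform in the malware was different:

```python
# Hypothetical scrambled data: each float hides one character code.
scrambled = [104.0, 101.5, 108.25, 108.0, 111.75]

# Truncate each float to an int and map it to its character.
decoded = "".join(chr(int(n)) for n in scrambled)
print(decoded)  # hello
```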

 

What I found was JavaScript redirecting the browser to a very suspicious looking domain.

Downloading the content of the URL resulted in more JavaScript, this time with a very long string (over 54000 bytes long!).

 

Again I found the unscrambling function, redid it in Python, and received what was clearly a malware injecting JavaScript that was just over 15 thousand bytes long!

The funny thing was, the malware script was not obfuscated (aside from all whitespace being removed), so I could actually see a function called “getShellcode”.

 

Despite being quite long, it was easy to see that the script used some vulnerability in Flash Player versions 10.0.40 to 10.2.159 to do its nasty business.

I have yet to unscramble its shellcode payload, so I am not sure what that business is exactly.

 

But, this package is not unique. I am sure there are thousands of variations of it in the wild.

 

Why am I bothering to write about it?

 

Because the main component used to hide the truth about what this malware does is JavaScript’s eval function.

In fact, it is used twice, both in the first stage JavaScript attached to an email, and in the second stage script that actually tries to inject the malware.
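A toy version of the pattern (the character codes and payload here are my own, far more innocent, invention):

```javascript
// Nothing in this source reveals what will run - that is the whole trick.
var codes = [99, 111, 110, 115, 111, 108, 101, 46, 108, 111, 103, 40, 52, 50, 41];
var hidden = codes.map(function (c) { return String.fromCharCode(c); }).join("");

// 'hidden' is now the string "console.log(42)".
eval(hidden); // prints 42
```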

 

Which got me wondering: why the hell did the designers of JavaScript put it there???

 

I know JavaScript is not the only language to have such function.

I know it has some legitimate uses (though I am not sure how many).

And as a developer, the last thing I would want to encourage is reducing a programming language’s power.

 

But seriously, is the huge security risk really worth it?

After all, this is a Browser scripting language, something you might download and run without even being aware you are doing it.

 

Even the name of the function sounds almost “evil” 😛

 

So here is my rant of the day: ban eval from JavaScript.

Who is with me?

It’s all about trust

02/04/2011 Leave a comment

We use computers for a lot of things today. In various forms, from the desktop to the “Smartphone”.

At work, at home, for business, for pleasure, for education, for communication.

Never before has there been a single object that centered so much of our daily activity around it, or intruded into so many aspects of our life.

Yet, how many people ever stop to think: “Can I trust my computer?”

How many people even consider such a question as making sense? You trust (or don’t trust) people, but things?

Trouble is, we need to be able to trust our computers, not just to work and do what we need of them, but also to keep our secrets from strangers and protect our wealth from theft.

Most people, be they “simple” users who barely understand how to use their computer, or the greatest of hackers who know precisely how everything works and how to make it do anything they want, trust their computers implicitly, to varying degrees.

But a computer is a collection of black boxes of two kinds: hardware and software.

They are “black boxes” for two reasons:

1. The people who produce them do their best to hide from us users what they do and how they work.

2. The majority of users don’t possess the knowledge to understand their workings even if they were revealed to them.

Number two is a matter for separate discussion, but as for reason number one: It can and should be avoided!

And this is why I use Free Software and open hardware whenever possible.

This way I can be as certain as possible that my computer (tablet / smartphone / thingamajig) does exactly what it promises and is under my control and not someone else’s.

The skeptics among you will probably say: “You couldn’t possibly read and evaluate every single line of code of every application you use, let alone go over schematics of your hardware.”

And that is true. But, I don’t need to. It is enough for me to know that my phone’s bootloader is not locked or encrypted which means I can update or replace my phone’s OS at any time.

It is enough for me to know that I do not have any “Treacherous computing” features in my PC which could lock me out of my own data at any moment.

Most important, I know my OS and its developers do not, and never will treat me as a thief.

I know this, because the code (and some of the specs) are out in the open. Because they are constantly inspected by thousands of eyes, and because people and companies that are involved in making these products are interested in making money by providing solutions, rather than just trying to extract it as best they can by providing a product which they continue to control even after you supposedly purchased it.

And, of course, I do not have to rely on this knowledge blindly. When true need arises, I can go and inspect the source, or consult people I trust to look into the guts of the thing and tell me if it’s safe and reliable.

Now, before you recommend that I put on a tin foil hat, consider this growing list of examples where people’s computers betrayed them, at the whim of their true masters:

And this list can, and probably will, keep growing as long as people use closed, proprietary software running on locked devices.

As seen from the last example, even if a large part of your system is Free, as in the case of Android, a single crucial application like the Android Market which is not Free, can take your freedom away.

And next time, it may not be about a book or a feature for geeks, or some music CDs. It may be your life’s work that goes missing, a crucial report for work or school, precious family films or photographs that can not be recreated.

I am glad I found out about free software before I suffered any such loss.

Free Software is not just about price or ideology or a way for developers to get their hands on some code. It’s about your freedom and security!

I trust the software I use, do you?

Trusted Computing from lafkon on Vimeo.


Welcome back to the 70’s!

08/12/2010 Leave a comment

Google has finally started a pilot program for netbooks (mini-laptops) running the long awaited (by some) Chrome OS.

Sadly, there aren’t many details about the hardware itself, and the only 3 things we know for sure are:

  1. It has a 12 inch screen
  2. It has WiFi n and 3G connectivity
  3. It weighs nearly 2kg! (3.8 pounds, which is 1.72kg to be precise)

But there is more to this than just another netbook that strides the border between mini and regular laptops.

Google hails its Chrome OS as a new paradigm in computing where “your browser is your OS”. To quote the introduction page: “It runs web-based applications, not legacy PC software.”

To me, this statement is funny, in fact, it appears to be an oxymoron.

I was born into the PC era. My first computer was a 100MHz Pentium 586, and though it still had a turbo button, by some archaic standards it might as well have been a “super computer”.

In fact, it was top of the line for home PCs at the time it was purchased.

Technology moved on quickly, so today, you are unlikely to find a smartphone which has a processor slower than 500MHz.

This was the promise of the “Age of the PC”: that anyone could get his own computer, powerful enough to do whatever the user needed, on its own.

Play games, listen to music, watch movies, edit documents – you did not need to rely on anyone.

Most people who are not geeks probably do not know this today, but back before personal computers hit the mass market during the mid 80’s, things were very different.

In the 70’s and early 80’s computers were still huge and so expensive that only large organizations such as universities and corporations could own them. To use a computer back then, one would use a “terminal” – a dumb screen with a keyboard that connected to the actual computer over some sort of network.

Though several people could use the computer from different terminals at the same time, each user would still get very limited (even by standards of that time) resources allocated to his account.

Each user got a certain amount of storage space for his files, a certain amount of memory for his programs to use, and a certain amount of “CPU time” to run his programs.

For nontechnical people reading this (if there are any) think of this as the limits on your email accounts: you can only send attachments of certain size, and you can only keep so much mail in your inbox before it becomes full.

Now imagine that your computer is not really yours: other people are using it, and you have to wait for them to leave enough free resources for you to be able to use it.

This is why personal computers were such a big deal for many people – they could finally use a computer and do what they wanted or needed without “standing in line” or “asking permission” from anyone.

And now, this is the “future” Google is promising us with Chrome OS.

Return from the personal computer to the mainframe and dumb terminal architecture of the 70’s.

Of course, there are advantages to the “cloud” approach:

  • It doesn’t matter where you are: since all your data and the programs you need are “in the cloud” (on some remote Internet server), it does not matter whether you have your laptop with you or you are at your desktop. You can access them from any suitable device as long as it has an Internet connection.
  • No maintenance – forget installing software or dealing with viruses: since your computer does virtually nothing except send and receive information to and from the net, there’s nothing to do but turn it on.
  • Your data is safer – some people will argue that a huge corporation like Google or Amazon has better backup facilities than the average computer user, and that its servers are better protected against hackers than your home PC, so you should trust it with all your precious files.

But looking at these supposed benefits, you can see that each of them hides several very real dangers:

  • If you can access your files from anywhere, so can anyone who happens to get their hands on your username and password. You may not care if someone gets the photos of your cat, but what about some naughty pictures from your bedroom or your company’s latest financial strategy document?
  • No control. When all your data and all the programs you use to manipulate this data are on someone else’s server they are under their control. What if the company you are relying on to provide your cloud computing account suddenly goes out of business? What if they decide not to provide service to your country due to export restrictions, or what if they just lock your account because of a clerical error or because someone decided you violated some terms of use?
    Even worse: what if they suddenly change account settings and expose documents you wanted to remain private? This has happened to Facebook users and with Google Buzz.
    And what about the applications you use? If they are in the cloud, your choice is limited to what your cloud provider gives you. If your provider decides you can only use brand X of document editor, then you will be forced to use only that brand. And if the provider suddenly decides to switch to brand Y? Go with it or change providers (good luck moving all your data and keeping it intact).
  • Putting all your eggs in one basket. Yes, the huge server farms that big corporations own are usually very sturdy. They have UPSes, backups, dedicated technicians and all kinds of other goodies, but in the end, even they fail. Yes, even the mighty Google has outages. Also, the bigger the server (or farm), the more attractive a target it is for hackers. And that means it will get hit much more often, and with much greater force, than some John Doe’s personal computer.
    If something happens to your PC and it stops working, you can usually use your laptop, or your work computer, or, in the worst case, borrow your friend’s computer to finish whatever you need to finish urgently. But with all computing done in the cloud, once the cloud goes down, all computers go down.

To me personally, the lack of control and privacy that comes with cloud computing makes it completely unacceptable as an absolute replacement for the desktop.

There are uses of the cloud that are acceptable to me, and which I believe are unavoidable for an average person:

I trust my personal mail to Gmail, because I do not know how to set up and run a properly configured email server. Even if I did, I am not sure I would have the time to do a good enough job to keep it properly secure and out of some spammer’s hands.

I also use Google Docs and Picasa for images and documents I need to make publicly available. Since there is no privacy concern there, I don’t mind surrendering them.

And of course, I use wordpress.com for this blog, because I am certainly not about to set up my own installation of WordPress.

But all these uses are very far from the future Google and some others are planning for us.

Storage devices keep getting ridiculously large in capacity and ever smaller in physical size (did you know you can get 32GB of storage on a microSD card the size of your thumbnail?), and the rest of computer hardware keeps becoming more powerful and cheaper at the same time, while Internet bandwidth remains limited and expensive (in certain countries much more than in others). Given all that, I hope most people will think twice before embracing this “futuristic” idea from way back in the 1970s.

Categories: Rants