When you read about IT related security threats and breaches in mainstream media, it usually looks like this:
Tech sites and dedicated forums usually do a better job.
In their blog post, the researchers detail how they analyzed 7 popular “connected car” apps for Android phones, which allow opening the car doors, and some even allow starting the engine. They found 5 types of security flaws in all of them.
Since I am part of a team working on a similar app, a couple of days later this article showed up in my work email, straight from our IT security chief.
This made me think – how bad are these flaws, really?
Unlike most of the stuff the good folks at Kaspersky find and publish, this time it’s not actual exploits but only potential weaknesses that could lead to the discovery of exploits, and personally, I don’t think some of them are even weaknesses.
So, here is the list of problems, followed by my personal analysis:
- No protection against application reverse engineering
- No code integrity check
- No rooting detection techniques
- Lack of protection against overlaying techniques
- Storage of logins and passwords in plain text
I am not a security expert like these guys, just a regular software developer, but I’d like to think I know a thing or two about what makes apps secure.
Let’s start from the bottom:
Number 5 is a real problem, and the biggest one on the list. Storing passwords as plain text is about the dumbest and most dangerous thing you can do to compromise the security of your entire service, and doing so on a platform that gives you dedicated secure storage for credentials, with no hassle whatsoever for your users, is just inexcusable!
It is true that on Android, application data gets some protection via file permissions by default, but this protection is not good enough for sensitive data like passwords.
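To make the risk concrete, here is a sketch of why plain-text storage is so bad. The file contents below are entirely made up, but they mimic the shared-preferences XML format Android apps commonly use; anyone (or any process) that can read the file can lift the credentials with a one-line search:

```python
import re

# A made-up example of what an app's shared_prefs XML file might look like
# when credentials are stored as plain text.
prefs = """<?xml version='1.0' encoding='utf-8'?>
<map>
    <string name="username">alice@example.com</string>
    <string name="password">hunter2</string>
</map>
"""

# No reverse engineering needed: one regular expression recovers the password.
match = re.search(r'<string name="password">([^<]*)</string>', prefs)
print(match.group(1))
```

Compare that with credentials kept in the platform’s dedicated secure storage, which never appear in a readable file at all.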
However, not all of the apps on the list do this. Only two of the 7 store passwords unencrypted, and 4 others store the login (presumably the username) unencrypted.
Storing only the user name unprotected is not necessarily a security risk. Your email address is the username for your email account, but you give that out to everyone and sometimes publish it in the open.
Same goes for logins for many online services and games that are used as your public screen-name.
Next is number 4: overlay protection.
This one is interesting: as the Kaspersky researchers explain in their article, Android has an API that allows one app to display windows of arbitrary size, with varying degrees of transparency, on top of other apps.
This ability requires a separate permission, but users often ignore permissions.
This API has legitimate uses for accessibility and convenience. I even used it myself in several apps, to give my users quick access from anywhere to some tasks they needed.
Monitoring which app is in the foreground is also possible, but you would need to convince the user to set your app up as an accessibility service, and that is not a simple task and cannot be automated without gaining root access.
So here is the rub: there is potential for stealing user credentials with this method, but pulling it off seamlessly enough that most users would not notice is very difficult. And it requires a lot of cooperation from the user: first they must install your malicious app, then they must go into settings, ignore some severe warnings, and set it up a certain way.
I am not a malware writer either, so maybe I am missing something, but it looks to me like there are other, much more convenient exploits out there, and I have yet to see this technique show up in the real world.
So if I had to guess, I’d say it is not a very big concern. Actually, if you got your app set up as an accessibility service, you could steal all text from the device without the overlay trick, and I can’t think of a way to properly detect when a certain app is in use without accessibility access and without root.
Now we finally get to the items on the list that aren’t really problems:
Number 3: root detection. A rooted device is not necessarily a compromised device. On the contrary – the only types of root you can possibly detect are the ones the user installed of their own free will, and that means a tech-savvy user who knows how to protect their device from malware.
The whole cat and mouse game around root access to phones does more harm to security than letting users have official root access from the manufacturer, but this is a topic for a separate post.
If some app uses a root exploit behind its user’s back, that access will only be available to that app, and will be almost impossible to detect from another app, especially one that is not supposed to be a dedicated anti-malware tool.
Therefore, I see no reason to count this as a security flaw.
Number 2: code integrity checks. This is just overkill for each app to roll out on its own.
Android already has mandatory cryptographic signing in place for all apps, which validates the integrity of every file in the APK. In recent versions of Android, v2 of the signing scheme was added, which also validates the entire archive as a whole (if you didn’t know, an APK is actually just a zip file).
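The “APK is a zip” point is easy to verify yourself. The snippet below builds a tiny stand-in archive in memory (the file names are just illustrative); with a real app you would pass the .apk path straight to ZipFile:

```python
import io
import zipfile

# Build a small zip in memory to stand in for an APK.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('AndroidManifest.xml', '<manifest/>')
    z.writestr('classes.dex', 'compiled bytecode would go here')
    z.writestr('META-INF/CERT.RSA', 'the signing certificate block')

# Reading it back requires nothing but a standard zip library.
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
print(names)
```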
So what is the point of an app trying to check its code from inside its code?
Since Android already has app isolation and signing at the system level, any malware that gets around those, and whose maker has reverse-engineered enough of the targeted app’s code to modify its binary in useful ways, should have no trouble bypassing any internal code integrity check.
The amount of effort required from the app developer to implement such protection, versus the small amount of effort it would take to break it, just isn’t worth it.
Plus, a bad implementation of such an integrity check could do more harm than good, by introducing bugs and hampering users of legitimate copies of the app, leading to an overall bad user experience.
And finally, the big “winner” – or is it loser?
Number 1 on the list: protection from reverse engineering.
Any decent security expert will tell you that “security by obscurity” does not work!
If all it takes to break your app is to know how it works, consider it broken from the start. The most secure operating systems in the world are based on open source components, and the algorithms for the most secure encryptions are public knowledge.
Reverse engineering apps is also how security experts find vulnerabilities so the app makers can fix them. It is how the information for the article I am discussing here was gathered!
Attempting to obfuscate the code only leads to difficult debugging, and an increased chance of flaws and security holes in the app.
It can be considered an anti-pattern, which is why I am surprised it is featured at the top of a list of security flaws by someone like Kaspersky’s experts.
Lack of reverse engineering protection is the opposite of a security flaw – it is a good thing that can help find real problems!
So there you have it. Two real security issues (maybe even one and a half) out of five, and two out of seven apps actually vulnerable to the biggest one.
So what do you think? Are the connected cars really in trouble, or are the issues found by the experts minor, and the article should have actually been a lot shorter?
Also, one small funny fact: even though the writers tried to hide which apps they tested, it is pretty clear from the blurred icons in the article that one of the apps is from Kia and another has the Volvo logo.
Since what the researchers found were not actual vulnerabilities that can be exploited right away, but rather bad practices, it would be more useful to publish the identity of the problematic apps so that users could decide if they want to take the risk.
Just putting it out there that “7 leading apps for connected cars are not secure” is likely to cause unnecessary panic among those not tech savvy enough to read through and thoroughly understand the real implications of this discovery.
Usually I don’t badmouth Java. I think it’s a very good programming language.
In fact, I tend to defend it in arguments on various forums.
Sure, it lacks features compared to some other languages, but then again, throwing everything including the kitchen sink into a language is not necessarily a good idea. Just look at how easy it is to get a horrible mess of code in C++, with a single operator doing different things depending on context. Is &some_var trying to take the address of a variable, or declare a reference? And what does &&some_var do? It has nothing to do with the boolean AND operator!
So here we have a simple language friendly to new developers, which is good because there are lots of those using it on the popular Android platform.
Unfortunately, even the best languages have some implementation detail that will make you want to lynch their creators or just rip out your hair, depending on whether you externalize your violent tendencies or not.
Here is a short code example that demonstrates a bug that for about 5 minutes made me think I was high on something:
```java
HashMap<Integer, String> map = new HashMap<>();
byte a = 42;
int b = a;
map.put(b, "The answer!");
if (map.containsKey(a))
    System.out.println("The answer is: " + map.get(a));
else
    System.out.println("What was the question?");
```
What do you expect this code to print?
Will it even compile?
Apparently it will, but the result will surprise anyone who is not well familiar with Java’s generic types.
Yes folks – the key will not be found, and the message “What was the question?” will be printed.
Here is why:
The generic types in Java are not fully parameterized. Unlike a proper C++ template, some methods of generic containers take parameters of type Object instead of the type the container was instantiated with.
In HashMap, even though put is properly parameterized and will raise a compiler error if a key of the wrong type is used, the containsKey and get methods take a parameter of type Object and will not even throw a runtime exception if the wrong type is provided. They will simply return false or null respectively, as if the key was simply not there.
The other part of the problem is that primitive types such as int are second-class citizens in Java. They are not objects like everything else, and cannot be used to parameterize generics. They do have object equivalents, named Integer and so on, but those don’t have proper operator overloading, so they are not convenient for all use cases.
Thus, in the code sample above, the variable a gets autoboxed to Byte, which as far as Java is concerned is a completely different type that has nothing to do with Integer, and therefore there is no way to find Byte keys in our map of Integer keys.
A language that implements proper generics would have parameterized these methods so either a compilation error occurred or an implicit cast was made.
In Java, it is up to you as the programmer to keep your key types straight, even between seemingly compatible types like different-sized integers. In the sample above, writing map.containsKey((int) a), or simply declaring a as an int, makes the lookup succeed.
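For contrast, a dynamically typed language sidesteps the whole issue, because dictionary keys are compared by value rather than by boxed type. A quick illustration in Python:

```python
d = {42: "The answer!"}

# Equal numbers hash equally in Python, regardless of their exact type,
# so both an int key and a float key find the same entry.
print(42 in d)     # True
print(42.0 in d)   # True
print(d[42.0])     # The answer!
```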
In my case, I was working with a binary protocol received from an external device, and the function filling up the map was not the same one reading from it, so it was not straightforward to align the types everywhere. But in the end I did it, and learned my lesson.
Maybe this long rant will help you too. At least until a version of Java gets this part right…
I was browsing the Android source code trying to understand some things about the ActionBar layout, when I ran into another little pearl showcasing Android programmers’ sense of humor – or is it their level of boredom?
Looking at an older version of ActionBarView.java, I found a member variable called mUpGoerFive (look at line 104 in the link provided).
It held a ViewGroup, so it was important for the display part, but the name did not make sense at first.
Until I remembered this little beauty: http://xkcd.com/1133/
What’s even funnier: while I was looking for a way to link to the proper version of the source file (the variable is removed in the latest version), I ran into the following commit message:
“Invasion of the monkeys”
I know these are not the first easter eggs of this kind found in code released by Google, and maybe I am not the first to find them (if you’ve seen this elsewhere, please leave a comment), but they did provide some entertainment during an otherwise tedious task, so I figured I’d mention them.
Richard Stallman, the father of the Free Software movement and the GNU project, always insists that people refer to some Linux based operating systems as “GNU/Linux”. This point is so important to him, he will refuse to grant an interview to anyone not willing to use the correct term.
There are people who don’t like this attitude. Some have even tried to “scientifically prove” that GNU project code comprises such a small part of a modern Linux distribution that it does not deserve to be mentioned in the name of such distributions.
Personally, I used to think that the GNU project deserved recognition for its crucial historical role in building freedom-respecting operating systems, even if it was only a small part of a modern system.
But a recent experience proved to me that it is not about the amount of code lines or number of packages. And it is not a historical issue. There really is a huge distinction between Linux and GNU/Linux, but to notice it you have to work with a different kind of Linux. One that is not only stripped of GNU components, but of its approach to system design and user interface.
Say hello to Android. Or should I say Android/Linux…
Many people forget, it seems, that Linux is just a kernel. And as such, it is invisible to all users, advanced and novice alike. To interact with it, you need an interface, be it a text based shell or a graphical desktop.
So what happens when someone slaps a completely different user-space with a completely different set of interfaces on top of the Linux kernel?
Here is the story that prompted me to write this half rant half tip post:
My boss wanted to backup his personal data on his Android phone. This sounds like it should be simple enough to do, but the reality is quite the opposite.
In the Android security model, every application is isolated by having its own user (they are created sequentially and have names like app_123).
An application is given its own folder in the device’s data partition, where it is supposed to store data such as configuration, user progress (for games), etc.
No application can access the folder of another application and read its data.
This makes sense from the security perspective, except for one major flaw: no 3rd party backup utility can ever be made. And there is no backup utility provided as part of the system.
Some device makers provide their own backup utilities, and starting with Android 4.0 there is a way to perform a backup through ADB (which is part of Android SDK), but this method is not designed for the average user and has several issues.
There is one way, an application on the device can create a proper backup: by gaining root privileges.
But Android is so “secure” it has no mechanism to allow the user to grant such privileges to an application, no matter how much he wants or needs to.
The solution of course, is to change the OS to add the needed capability, but how?
Usually, the owner of a stock Android device would look for a tool that exploits a security flaw in the system to gain root privileges. Some devices can be officially unlocked so a modified version of Android can be installed on them with root access already open.
The phone my boss has is somewhat unusual: it has a version of the OS designed for development and testing, so it has root but the applications on it do not have root.
What this confusing statement means is that the ADB daemon runs with root privileges on the device, allowing you to get a root shell on the phone from the PC, and even remount the system partition as writable.
But, there is still no way for an application running on the device to gain root privileges, so when my boss tried to use Titanium Backup, he got a message that his device is not “rooted” and therefore the application will not work.
Like other “root” applications for Android, Titanium Backup needs the su binary to function. But stock Android does not have a su binary. In fact, it does not even have the cp command. That’s right – you can get a shell interface on Android that might look a little bit like “regular Linux”, but if you want to copy a file, you have to use cat.
Google wanted to avoid any GPL-covered code in the user-space (i.e. anywhere they could get away with it), so not only did they not use a “real” shell (such as Bash), they didn’t even use Busybox, which is the usual shell replacement in small and embedded systems. Instead, they created their own very limited (or, as I call it, neutered) version called “Toolbox”.
Fortunately, a lot of work has been done to remedy this, so it is not hard to find a Busybox binary ready-made to run on an Android-powered, ARM-based device.
The trick is installing it. Instructions vary slightly from site to site, but I believe the following will work in most cases:
```shell
adb remount
adb push busybox /system/bin
adb shell chmod 6755 /system/bin/busybox
adb shell busybox --install /system/bin
```
Note that your ADB must run as root on the device side!
The important part to notice here is line 3: you must set the setuid and setgid bits on the busybox binary if you want it to function properly as su.
And no – I didn’t write the permissions parameter to chmod as digits to make myself look like a “1337 hax0r”. Android’s version of chmod does not accept letter arguments for permissions.
After doing the steps above, I had a working busybox and a proper command shell on the phone, but the backup application still could not get root. When I installed a virtual terminal application on the phone and tried to run su manually, I got the weirdest error: unknown user: root
How could this be? ls -l clearly showed files belonging to the ‘root’ user. As a GNU/Linux user, I was used to more descriptive and helpful error messages.
I tried running ‘whoami’ from the ADB root shell, and got a similarly cryptic message: unknown uid 0
Clearly there was a root user with the proper UID 0 on the system, but busybox could not recognize it.
Googling showed that I was not the only one encountering this problem, but no solution was in sight. Some advised to reinstall busybox, others suggested playing with permissions.
Finally, something clicked: on a normal GNU/Linux system, there is a file called passwd in the /etc folder. This file lists all the users on the system, along with some information for each user, such as their home folder and login shell.
But Android does not use this file, and so it does not exist by default.
Yet another difference.
So I did the following:
```shell
adb shell
# echo 'root::0:0:root:/root:/system/sh' >/etc/passwd
```
This worked like a charm and finally solved the su problem for the backup application. My boss could finally backup and restore all his data on his own, directly on the phone and without any special trickery.
Some explanation of the “magic” line:
In the passwd file, each line represents a single user, and has several fields separated by colons (:). You can read about it in detail here.
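To make the field layout concrete, here is the exact line I used, split into its seven fields (a quick illustration in Python; the field names follow the standard passwd format):

```python
line = 'root::0:0:root:/root:/system/sh'
field_names = ['username', 'password', 'UID', 'GID', 'comment', 'home', 'shell']

# passwd fields are simply colon-separated.
for name, value in zip(field_names, line.split(':')):
    print(f'{name:>8}: {value!r}')
```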
I copied the line for the root user from my PC, with some slight changes:
The second field is the password field. I left it blank so the su command will not prompt for a password.
This is a horrible practice in terms of security, but on Android there is no other choice, since applications attempting to use the su command do not expect a password prompt.
The last field is the “login shell” which on Android is /system/sh
The su binary must be able to start a shell for the application to execute its commands.
Note, this is actually a symlink to the /system/mksh binary, and you may want to redirect it to busybox.
So this is my story of making one Android/Linux device a little more GNU/Linux device.
It took me a lot of time, trial and error, and of course googling to get this done, and it reminded me again that the saying “Linux is Linux” has its limits, and that we should not take the GNU for granted.
It is an important part of the OS I use both at home and at work, not only in terms of components but also in terms of structure and behavior.
And it deserves to be part of the OS classification, if for no other reason than to distinguish the truly different kinds of Linux that are out there.
Please look at the following picture:
These are “smart” phones I own.
All of them have different hardware specs, but one is truly different from the others.
Can you tell which one?
It’s the one on the right – i-mate Jamin.
It is also the first “smart” phone that I ever owned.
What makes it different from the others?
The fact that it is the only one in the bunch that does not run on Free Software.
I was inspired to take this picture and put it on my blog by another post (in Hebrew), that talks about black, round corner rectangles and the recent madness surrounding them.
But I am not going to write about that.
There are already plenty of voices shouting about it all over the Internet, and I have nothing constructive to add.
Instead, I will introduce you to my lovely phone collection, which contributed a lot to my hobby and professional programming.
And we will start with the historical sample on the right: i-mate Jamin. (specs)
Back in early 2006, when this device came out, “smartphone” was still a registered trademark of Microsoft, the name they chose for the version of their Windows CE based mobile OS for devices with no touchscreen. (The touchscreen version was then called Windows Mobile Phone Edition)
Such devices were for geeks and hard core businessmen who had to be glued to their office 24/7.
But despite having a proprietary OS, this was a very open device: you could run any program on it (we didn’t call them “apps” back then), and you could develop for it without the need to register or pay.
It didn’t matter what country you were from, or how old you were. The complete set of tools was available as a free download from Microsoft’s site.
And the OS allowed you to do a lot of things to it: like its desktop cousin, it completely lacked security, you could even overwrite, or more precisely “overshadow” OS files that were in ROM with a copy with the same name stored in user accessible NAND flash (or RAM on older devices).
The system API was almost identical to the Win32 API, which was (and still is) very common on the desktop, so if you knew how to write a program for your Windows powered PC, you knew how to write a program for your phone.
Unlike the systems we are used to today, Windows Mobile had no built in store.
You were on your own when it came to distributing your software, though there were several sites that acted much like the application stores do today: they sold your program for a commission.
But that too meant freedom: no commercial company was dictating morals to the developers or telling them that their program had no right to exist because it “confused users” or simply competed with that company’s own product.
So even though the OS brought with it most of the diseases common to desktop versions of Windows, it gave developers a free range, and thus had a thriving software ecosystem, until MS killed it off in a futile attempt to compete with Apple’s iOS and Google’s Android by taking the worst aspects of both.
The second phone from the right is the Neo 1973.
It was so named because 1973 was the year the first cellular call was made.
I got this device in 2008. By that time, I learned a lot about software freedom, so when I heard about a completely free (as in freedom of speech) phone, I just had to have it.
It wasn’t easy: it could only be bought directly from the company, which meant international shipping, and a lot of bureaucracy with the ministry of communication, which required special approval for every imported cellphone.
I was particularly concerned because this was not a commercially available model, despite having FCC certification, so it was possible that I could not get it through customs as a private citizen.
In the end, the problem was solved, though not before customs fees and added UPS charges almost doubled the cost of the device.
It felt great to have it. I never had such complete freedom with a phone before.
I had a lot to learn, and in the end, I wound up making only one usable program for my two Neo phones: the screen rotate.
One of the things that amazed me about the OpenMoko project was that, even though the software and hardware were experimental and at an early stage of development, in many ways they were much better than the commercial Windows Mobile that had been sold to many phone makers for years.
For example, OpenMoko had perfect BiDi support needed for Hebrew and Arabic languages, as well as fonts for those languages shipped with the OS.
This is something MS never did for Windows Mobile, despite having a large R&D center in Israel for almost two decades, and having a large market in other countries that write right-to-left languages.
Also, the Internet browser, though slow, was much more advanced than the one on WM, and even came close to passing the Acid2 test.
The only trouble was, I could never get the microphone working. It didn’t really matter, since I wanted the phone for development and testing, and didn’t intend to carry it around with me for daily use.
Which brings us to the next phone in the collection: the Neo Freerunner.
This was the second device from the OpenMoko project, the more powerful successor to the Neo 1973.
At first, I swore I would not buy it. There just wasn’t enough difference between it and the original. Sure, it had WiFi and a faster processor, but is that really a reason to buy another phone?
But by that time, my trusty old Jamin was getting really old, it developed some hardware problems and even with a new battery would not charge well.
I had a lot of choice in smartphones, working for a company that developed software for them, yet I could not bear the thought of buying yet another non-free phone.
So in the end I broke, and bought the Freerunner, mostly for that nice feeling of carrying a tiny computer in my pocket, made completely with Free Software and Open Hardware.
Thanks to Doron Ofek who put a lot of effort in to advancing the OpenMoko project (and other Free Software projects) in Israel, getting the second device was much easier.
And so it became my primary and only cellphone for the next three years.
I don’t think there are many people in the world who can honestly say they used an OpenMoko phone as their primary cellphone, with no backup, but I was one of them.
Flashing a brand new OS twice a month or more (if I had time) was just part of the fun.
Sadly, all good things come to an end. The life expectancy of a smartphone is 18 months at best. I was seeing powerful Android-based devices all around me, with large screens, fast processors, and, most importantly, 3G data (I spend a lot of time out of WiFi range).
And I wanted a stable device. As much as I hated to admit it I needed a break from living with a prototype phone and a rapidly changing OS.
But I wasn’t ready to lose my freedom. And I didn’t want to completely surrender my privacy.
Most Android devices need to be hacked just to get root on your own system. And even though the OS is Free Software, most of the “apps”, including built-in ones, are proprietary.
And of course, Google is trying to milk every last bit of your personal information it can, and trying to keep them from doing it on Android is very uncomfortable, though definitely possible.
This just won’t do.
Finally, I found a perfect compromise:
My current phone – Nokia N900 (spec).
It was far from being a new device when I finally ordered one through eBay.
Yet it was the perfect merger, borrowing from all worlds:
It runs mostly on free software, with a real GNU/Linux distribution under the hood, unlike Android which uses a modified Linux kernel, but has little in common with what most people call “Linux”.
It has a proper package manager, offering a decent selection of free software, and updates for all system components including special kernels, but also connected to Nokia’s OVI store.
It even came with a terminal emulator already installed.
Unlike the OpenMoko project, this was a finished and polished device, with a stable, simple, useful and convenient interface, widgets, and all applications working satisfactorily out of the box.
It even has the Flash plugin which, though a horrible piece of proprietary software on whose grave I will gladly dance, is still needed sometimes to access some sites.
So here I am now, with an outdated, but perfectly usable phone, that can do just about anything from connecting USB peripherals to mounting NFS shares.
It is perfect for me, despite its slightly bulky size and relatively small 3.5 inch screen.
But I know that no phone lasts forever. Some day, the N900 will have to be retired, yet I see no successor on the horizon.
With Microsoft and Apple competing in “who can take away most user rights and get away with it”, and Android devices still containing plenty of locks, restrictions and privacy issues, I don’t know what I will buy when the time comes.
Who knows, maybe with luck and a lot of effort by some very smart people, the GTA04 will blossom in to something usable on a daily basis.
Or maybe Intel will get off their collective behinds and put out a phone with whatever Meego/Maemo/Moblin has morphed into.
Even Mozilla is pushing out a Mobile OS of sorts, so who knows…
What do you think?
Last week a friend of mine got an email pretending to be from LinkedIn.
It looked suspicious so she forwarded it to me for inspection.
One notable part of it was a large array of floating point numbers, positive and negative.
The funny thing was, the malware script was not obfuscated (aside from all whitespace being removed), so I could actually see a function called “getShellcode”.
Despite being quite long, it was easy to see that the script used some vulnerability in Flash Player versions 10.0.40 to 10.2.159 to do its nasty business.
I have yet to unscramble its shellcode payload, so I am not sure what that business is, exactly.
But, this package is not unique. I am sure there are thousands of variations of it in the wild.
Why am I bothering to write about it?
I know it has some legitimate uses (though I am not sure how many).
And as a developer, the last thing I would want to encourage is reducing a programming language’s power.
But seriously, is the huge security risk really worth it?
After all, this is a Browser scripting language, something you might download and run without even being aware you are doing it.
Even the name of the function sounds almost “evil” 😛
Who is with me?
Finally, a “development” post for my “developer” blog.
Recently, I’ve been working on some XML processing programs in Python.
The minidom module is great if you want your XML in a tree, and want tag names and attributes easily accessible, but, what happens if you want the text content inside a tag?
DOM does not have a “tag value” concept. Instead, every bit of text in the XML, including the indentation, is a “text node”, which is parsed as a separate tree element.
That means that if you have something like this:
<name>John Smith</name>
You will get a tree with two levels: a top level for the “name” element, whose nodeValue will be None. This element will have a child node (the second level of the tree) of type TEXT_NODE, whose value will be the text “John Smith”.
So far, so good, but, what if the value we want has some XML markup of its own?
<text>This text has <b>bold</b> and <i>italic</i> words.</text>
Now we have a complex tree on our hands with 3 levels and multiple branches.
It will look something like this:
```
<text>
 |- "This text has "
 |- <b>
 |   |- "bold"
 |- " and "
 |- <i>
 |   |- "italic"
 |- " words."
```
As you can see, this is a big mess, with the text split into multiple parts on two separate tree levels.
There is no facility in minidom to get the value of our <text> tag directly.
There is, however, a way around it that is simple but not obvious: you need to “flatten” the desired node into an XML string, then strip the tag itself from the string, and you will have a clean value.
Here is the code:
```python
def get_tag_value(node):
    """Retrieves the value of a given XML node.

    parameter: node - node object containing the tag element, produced by minidom
    return: content of the tag element, as a string
    """
    xml_str = node.toxml()  # flattens the element to a string
    # cut off the base tag to get clean content:
    start = xml_str.find('>')
    if start == -1:
        return ''
    end = xml_str.rfind('<')
    if end < start:
        return ''
    return xml_str[start + 1:end]
```
Just pass the node whose value you want to the function, and it will give you back the value as a string, including any internal markup.
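Here is a quick demonstration on the example from earlier, with the same flatten-and-slice trick repeated inline so the snippet stands on its own:

```python
from xml.dom.minidom import parseString

doc = parseString('<text>This text has <b>bold</b> and <i>italic</i> words.</text>')
node = doc.getElementsByTagName('text')[0]

# The same trick get_tag_value uses: flatten the node to an XML string,
# then slice off the outer tag.
xml_str = node.toxml()
value = xml_str[xml_str.find('>') + 1:xml_str.rfind('<')]
print(value)  # This text has <b>bold</b> and <i>italic</i> words.
```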
I place this code in the public domain, which means you can use it anywhere any way you want with no strings attached.