Sunday, April 10, 2011

Is Open a dirty word now?

Open technologies have gained a lot of momentum in the past couple of years, but what has become of the term Open? Does it still stand for the same values it did previously? Were the purists who argued about the differences between Open and Free right?

If you're reading this blog you're probably aware that I'm fairly fond of FOSS, both as a concept and as a business model. Undoubtedly, open technologies have made huge inroads in various industries - servers in the IT industry, various OSes in the telecom industry, etc. However, there is a disturbing tendency in the world of Open: Open implies Free less and less. You might be surprised at this point - what? How can you be Open and not Free? Well, if you look at the 'classic' FOSS definition on Wikipedia, you'll see Open Source defined as "An open source license is a copyright license for computer software that makes the source code available for everyone to use." - this is the type of Open we all cherish and love.

But let's take a look at who sports the Open sticker nowadays. The 500 lb gorilla is of course the Open Handset Alliance and the Android Open Source Project. Despite all the "Open" in the names, development does not happen in the open, and lately even source availability has become selective. Another infamous example is the Open Screen Project run by Adobe, intended to bring Flash-related technologies to everybody - but in fact nothing more than another industry alliance trying to grab market share from competitors, with Open Source and its values not accounting for much. Other organizations also ride the Open moniker - take for example the Open Design Alliance, a nonprofit organization that focuses on the development of CAD-related libraries. However, Open here means pay for the binaries, and pay a lot ($25K) to see the sources, unless a board of members deems you worthy. Not the type of open libraries you were expecting, right?

That's where the problem lies - Open is more and more a synonym for the "For Sale" sticker, and the term "Alliance" is becoming the marketing-lingo equivalent of "Cartel".
This problem was recognized fairly early on by the OSI, hence the term "OSI approved Open Source license", but if the term Open becomes diluted enough, will it mean anything at all beyond marketing purposes, a counter-term for "exclusive technology"? Will the O in FOSS become meaningless? To be honest, the term Open was used long before Free, even before the software industry, and it meant "public", mostly in the context of standards. However, now even that decades-old meaning is losing its value. Today's Open is far too often unrelated to Free or even Public.

What's the takeaway, you might ask? Be careful how you yourself use the term, and be wary of supporting any project or (especially big-business) initiative just because it uses the word "open". Support the values, and the projects that promote those values, not the names.

Saturday, April 2, 2011

The age of pointless mobile benchmarks

Mobile devices are ever more like their desktop counterparts, and we're starting to see benchmarks popping up on popular gadget sites. Sadly, most of these sites have not yet gone through the analytical phase that most PC review sites went through many, many years ago, and in effect they turn into simple marketing vehicles. Let's see the common pitfalls!

To be clear, I'm not talking about bad journalism that results in articles like this (note how they manage to leave out of the title the ONE thing - GPU performance - that the article is about), even if the source is a very reasonable article. I'll try to describe why the methods and criteria established in the PC field, which are now starting to be applied to the mobile arena (like the Anand article above), make a lot less sense than they did for the original personal computers.

Pitfall no. 1: Level of integration - hardware

Mobile devices are nowadays mostly built around what is known as an SoC (System-on-Chip), which means that most components that would be separate chips - or even replaceable units - on a notebook computer are all rolled into one. This makes it very difficult to benchmark *single* components, as they are inseparable from the rest of the system. How much is an NVidia vs ATI benchmark worth if you're comparing a Dell Inspiron 13" notebook with an NVidia card against an HP desktop with an ATI card and a 22" display? To make it worse, some devices include additional video processing units (which again might be good only for certain formats and bitrates), so the original chipset-style comparison is even more difficult to make.

Pitfall no. 2: Architectural differences

The X86 CPU world is incremental - hardware backwards compatibility is pretty much a given, and the only real difference in capabilities is the timeframe in which vendors add extensions (like the different iterations of SSE). Mobile devices have to be a lot more conscious of transistor count, and usually binary compatibility is the first thing to go. For example, the relatively new Tegra2 chose to forego the NEON multimedia instructions that are common in Snapdragon and OMAP chips. OpenGL ES 2.0 is not backwards compatible with OpenGL ES 1.1, etc. Depending on what you use and how your software works, this makes things a lot murkier, as the fact that device X runs Quake 123 better means nothing in terms of whether the software you actually want to run will work better.
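As an aside, this is exactly why well-behaved mobile software probes CPU features at runtime instead of assuming them. A minimal sketch of what that can look like on Linux, reading the kernel's /proc/cpuinfo feature list (the function name and fallback behavior here are my own assumptions for illustration, not from any particular SDK):

```python
def has_cpu_feature(feature, cpuinfo_path="/proc/cpuinfo"):
    """Return True if the given flag appears in the kernel's CPU
    feature list. On ARM Linux the relevant line starts with
    "Features"; on x86 it starts with "flags"."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.lower().startswith(("features", "flags")):
                    if feature.lower() in line.lower().split():
                        return True
    except OSError:
        # No cpuinfo available - assume the feature is absent.
        return False
    return False
```

On an ARM device with NEON the "Features" line contains the neon flag; on a Tegra2 it doesn't, and the application can fall back to a plain C code path instead of crashing on an illegal instruction.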

Pitfall no. 3: Form factor difference

A variation of the previous point - resolutions are vastly different, and this affects not only graphical performance but also our usage patterns and perception of performance. Nominally one device can have a better framerate, but because of differences in screen size and distance to the eye, you might actually, subjectively, prefer the one with the marginally lower framerate.
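To see how much raw resolution alone can skew a framerate comparison, here's a back-of-the-envelope sketch (the screen sizes and the 60 fps figure are made-up assumptions, purely for illustration):

```python
# Hypothetical devices: same GPU, different screens.
hvga = 480 * 320   # 153,600 pixels per frame
wvga = 800 * 480   # 384,000 pixels per frame

ratio = wvga / hvga  # 2.5x more pixels to push per frame

# If the workload is fill-rate bound, a GPU managing 60 fps on the
# HVGA screen drops to roughly 60 / 2.5 = 24 fps on the WVGA one -
# a huge "benchmark gap" produced by identical silicon.
fps_hvga = 60
fps_wvga = fps_hvga / ratio
print(ratio, fps_wvga)  # 2.5 24.0
```

So a raw FPS number tells you very little unless the reviewer normalizes for resolution - and even then, perceived smoothness on a small screen held close is a different thing again.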

Pitfall no. 4: Multicore is a double-edged sword

Any multicore design is only as good as the software that runs on it. This was true on the desktop, but multicore is a whole different beast on mobile devices, where the quality of the software determines whether that dual-core monster is better - or actually worse - than a well-executed single-core one, both in terms of performance and battery usage.

Pitfall no. 5: Software platform diversity

In the desktop and notebook world, just by taking Windows and Mac (and Linux of course! :) you cover 99%+ of users. In mobile, on the other hand, you literally have a jungle of OSes (almost a dozen are above the million-user mark), each optimized for very different hardware and very different use-cases. This is not an issue in the X86 world, where application-level performance is comparable across OSes, with at best a few percent difference; but on mobiles, due to drivers and optimization for various modes of operation, the same hardware can behave vastly differently under different OSes, so those extra FPS in a benchmark can again be very misleading.

There are plenty more pitfalls, but let's not overdo it, so on to the conclusion: since the complexity of these devices now rivals personal computers, it's only natural that people want to compare them the way they compared personal computers. The trouble is, these devices are not NEARLY as uniform as their bigger brethren, so new ways of benchmarking will have to emerge - ways that are less hardware- or device-oriented, even if that means we'll have to give up FPS measures as such.