Category Archives: Blog

Learning from the web

Paul Graham and many others have pointed out the virtues of web software compared to normal desktop software. For Nemo Documents, a file manager for local files, it was quite clear that we needed a desktop application. Given this, how could we then apply as many of the good things about the web as possible to developing a Windows application?

We quickly decided on WPF as it allows one much greater freedom in designing the application. We wanted something visually appealing, and in this I think we succeeded (whether that is the case is of course not up to me to decide :-)). At least it is not one of the ordinary ugly grey programs that are so common in the Windows world. As for WPF as a framework, I’m quite torn. On one hand it allows one to do a lot of fancy stuff, but on the other there is way too much architecture astronauting in the framework, and that really hurts when you’re just trying to get something done. jQuery is a good example of how to do this right.

Another thing to learn from the web is to watch the error log and quickly fix problems people are seeing. There is nothing more frustrating as a user than software that is not working, so we made it a virtue to respond quickly to errors and to get new versions into the hands of people. With the built-in auto-update feature of Visual Studio, it’s quite easy to keep already installed versions updated. It’s not quite as smooth as updating code on a server, but it goes a long way. And I really think that providing excellent customer support is key these days. When it’s so easy to go “next-door”, one has to provide exceptional service to retain users.

The last point also goes hand-in-hand with agile development: getting software out and into the hands of people to get early feedback, and using that to better shape the software to fit real needs. We try to release new features when we consider them stable enough for ourselves to use. And that doesn’t have to be every half year 🙂 We recently did this with the Google Calendar and Google Docs integration, a feature we coded and rolled out a month after the initial release.

A messy desktop

Earlier this week we released a new version of Nemo Documents; the biggest addition is that we integrated Google into the desktop. How this improves things has been documented on the official Nemo Documents blog, so instead of writing more about that I want to focus on a more personal angle, namely the subject of a messy desktop versus an organized one.

My Windows desktop is quite messy. You can see how it looks below. Then again, it’s mostly used to store temporary stuff. Projects we are working on always go into a neat folder structure inside our version control system.

Finding stuff on a messy desktop can sometimes be a bit tedious, but on the other hand so is cleaning up. Furthermore, it might not even be a good idea to clean up too much, no matter what your mother tells you 😛 The urge to clean up is also lessened by the fact that you know 90% of the stuff will most likely never be used again, but you keep it around just in case. This is where I think Nemo Documents really shines. It gives you a structured view of your files based on time and allows one to organize as much or as little as needed using labels, while still maintaining the folder structure already in place. I map both my structured folders and my desktop into this view.

While talking to people about how they organize their files and documents, I tend to meet two types of people: those, like me, with a messy or semi-messy structure, and those at the other end with a big folder hierarchy to structure their files. The question then becomes: is Nemo Documents only for messy people? After releasing the software we have gotten feedback from a lot of people, including the same people who are big on organizing, and from what we are hearing they are very fond of the system as well. The thing is, they love structure, and that is exactly what Nemo Documents gives them. Free file organizing is always welcome, I guess 🙂

Amazon Kindle review

I recently bit the bullet and bought a Kindle. I had been wanting to see the display of an ebook reader for a while but never had the chance; still, the reviews of the display were all very good, so I wasn’t too worried about that. With the recently announced 3rd generation Kindles at a much more reasonable price, I decided it was time to see what all the fuss was about.

I have been using the device for about two weeks and so far the overall impression is very positive. I haven’t recharged the device yet and the battery is still about half full. I usually read at night before going to sleep. During the first week I also used the built-in Oxford dictionary while I was finishing off a “normal” book, The Day of the Triffids (excellent book, btw). Even just as a dictionary it works very well because of the keyboard, the screen and the fact that you don’t need to think about charging.

I have started reading Cory Doctorow’s latest novel, For the Win. The book can be downloaded for free in Kindle format (DRM-free). It’s wonderful to see Doctorow standing by his principles and embracing the future. In the past I would download his latest work and read it on my computer while waiting for the physical book to arrive in the mail. Now I can just download it right away, read it on a very nice screen, and donate if I like the book.

Living in Denmark, I must confess that I haven’t given much thought to using it to read Danish books. I mostly read books in English anyway, but at some point I’ll have to check out if and how one can borrow ebooks from the library. But that is a subject for another blog post 🙂

The good:

  • The screen. It’s better than a paperback. Yes, it’s that good.
  • The dictionary (I’m quite surprised by how much I’m using it)
  • Battery life
  • Very light, and it can hold a ton of books

The bad:

  • Ebook prices. Why is the paperback version sometimes cheaper than the digital one?
  • DRM on books
  • PDF files can be viewed, but one really needs a bigger screen, as the Kindle will not reflow the text to fit the screen properly. Hopefully this will be fixed in a firmware update sometime in the future.

And the ugly:

  • It’s not too shabby looking with the graphite 🙂

Nemo Documents released!

Today I’m pleased to announce something we at IOLA have been working on for quite a while. In essence it deals with how one can create a more humane interface for managing files and documents. By humane I mean an interface that is built with people in mind instead of computers. I’ve written a bit more elaborately on the official blog about how and why we have designed the system the way we did. If this short teaser was enough of an appetizer, you can also just go ahead and try our beta version of Nemo Documents right now for free.

Concurrency the other way around

Clojure is built around concurrency, and it clearly shows in the abstractions the language makes available. I would say that concurrency is pervasive in the language. The good thing about that is that it’s a bit harder to shoot yourself in the foot when programming with multiple threads. The bad side is that it adds quite a bit of mental overhead in situations where concurrency is undesirable.

As an example, in mucomp there is a certain part of the code that deals with the audio player. This is inherently a resource that should only be handled by one thread at a time. Clojure comes with a very good abstraction for exactly this problem: agents. An agent is simply some state that is manipulated by only one thread. An agent is used by sending it a function that takes the old state and returns a new state. With that, one gets everything that is needed to write an audio player: serialized access and safely mutable state.
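As a minimal sketch of the idea (the names here are illustrative, not the actual mucomp code):

```clojure
;; Illustrative sketch: an agent holds the player state, and all
;; changes go through `send`, so only one thread at a time ever
;; touches the state.
(def player (agent {:status :stopped, :track nil}))

(defn play [state track]
  ;; old state in, new state out
  (assoc state :status :playing :track track))

(send player play "some-song.flac")
;; once the action has run, @player reflects the new state
```

The function passed to `send` is queued and applied to the agent’s current state asynchronously, which is exactly the serialized access an audio player needs.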

The only bad thing about agents is that if one forgets to return something from a function run on the agent, the new state of the agent will be nil. After being bitten by this twice, I decided that enough was enough. One of the very nice things about languages in the Lisp family is that one can mold one’s own abstractions to make code better (easier to read and with fewer bugs, in this case).

The following macro creates a new way to define functions. Functions defined in this way will check for nil on return and return the old state instead. The only change needed in code is to use defa instead of defn 🙂
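The original listing was not preserved in this archive, but a sketch of what such a macro could look like, given the behavior described above, is:

```clojure
;; Sketch, assuming the semantics described in the post:
;; like defn, but for agent actions. If the body returns nil,
;; the old state is returned instead, so the agent's state is
;; never accidentally wiped out.
(defmacro defa
  [fname [state & args] & body]
  `(defn ~fname [~state ~@args]
     (let [result# (do ~@body)]
       (if (nil? result#) ~state result#))))
```

For example, `(defa stop [player] (when (:playing player) …))` would leave the agent’s state untouched when the `when` falls through to nil, instead of setting it to nil.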

Do we really need record labels?

I was very sad to hear that one of the better bands in metal, The Project Hate MCMXCIX, parted ways with their record label, Vic Records, after only one record, because the label didn’t have the money to record their latest album. I guess it’s not easy to have a band on your roster that doesn’t tour.

The new album is written and just needs the funding to get recorded. One great thing about Lord K, the main man behind the band, is that he cares a lot about sound quality. The music sounds so much better in FLAC than in MP3, and proper speakers / headphones of course help a lot 🙂 *hint* Slayer and Metallica.

The band has been searching for a new record label, but has decided to try a donation experiment to see if they can get the money needed for the recording through generous metal heads instead of a record label. I suggested that he try Kickstarter, but I guess he just wanted a low-tech email solution.

It is going to be extremely exciting to see how this works out. In the mainstream media, the internet and p2p networks have to a large degree been associated with destroying the music industry. This is our chance to show that they can also be used to create music.

Microfunding

I’ve recently joined Flattr (that’s the icon you can see on the right :-)), and just last week I read about some students from NYU who got almost $200,000 in funding through Kickstarter to write an open source Facebook clone. Something is definitely buzzing in the micropayment world.

When you look at Kickstarter and Flattr, they are attacking the same problem, funding, from different angles. Kickstarter tries to get all the funding up front, while Flattr is more of a tip-jar model for something already produced. So in a way, they are complementary. I would argue that they both work best if the content is made public one way or another. And that is where I think there is huge potential.

Something like Flattr creates an alternative to paywalls and an alternative to ads. And that is something I would very much like to see.

Google search is not a programmer’s best friend

I was playing around with Google this weekend. The original problem I wanted to solve was that last.fm returns strange release dates for albums, so I was writing a small script that would extract the correct release date from various sources. I was aiming at www.metal-archives.com and Wikipedia. Both of these sites have different search pages, and in general I’ve come to rely more on Google’s site:xxx functionality than on individual sites’ own search engines. So I thought, why not just use Google programmatically to search the sites? Seems easy enough.

Failure 1 (I’m feeling lucky):

Google has a very nice feature called “I’m feeling lucky” that will direct you to the first result. If I could specify my queries well enough, I could rely on that and not have to parse Google to get the URL. It’s very simple: you just add &btnI at the end of your query and Google will redirect. Sadly, while it works fine most of the time, sometimes it just fails to redirect you. I couldn’t find any pattern to this randomness, and a “works sometimes” solution is not a good one 😐

Failure 2 (google ajax):

I then found out that Google has a seemingly very nice API that lets you do queries and get JSON back. JSON is easy to work with, and it also allows one to go through several results in case Google doesn’t return the right one as the first result. After a bit of poking around, I found out that the Google AJAX API randomly returns different results from normal Google search. It’s like using Yahoo instead of Google. A bit more poking around turned up a two-year-old bug report about this. Furthermore, the TOS directly forbids using the API for this kind of activity. Oh well, it didn’t work anyway.

Failure 3 (parsing google results directly):

After two bitter defeats I thought screw it, I’ll just parse the damn Google result pages; how hard can it be? At least I know that they give me the right results. So I coded everything up and checked that things were working. Then I let it loose on my collection (2×275 requests), and around the middle it stopped working. I poked around a bit and found out that Google had identified my program as a bad boy and decided to spank it by returning a “Please identify yourself as a human” page instead of the normal result page.

As a side note, after 3 bitter defeats I was ready to jump ship and try Bing or Yahoo. That was a quick detour though, as none of them were up for the challenge of returning good results.

Channel downmixing in MPlayer

Recently I have been playing with downmixing in MPlayer. When I bought new speakers, I decided to go with stereo instead of surround since I mostly listen to music. As anyone using MPlayer or any “derived” players such as VLC has discovered, there is an incredibly annoying problem in that the voices of the actors are very low; actually, in general the sound is very low. It appears that when mixing down to two speakers, the center channel is put very low in the mix. The same could be said about the subwoofer, although that is naturally not as easily recognized.

A quick Google search revealed that MPlayer has several audio filters that might potentially work: volume, volnorm, pan and hrtf. I quickly discarded volume and volnorm since I don’t want to just boost the sound; I want to distribute the channels properly. hrtf seemed like a good simple choice, since pan looked very complex. Sadly, in the middle of Harry Potter I had to turn it off because it was causing lots of clipping. So I was left with pan. It took a while to get a good default: I first tried turning sub and center up to one, but one movie or another would introduce the dreaded clipping, so I had to keep them down a bit while still retaining a decent boost of center and sub. After an afternoon of testing I came to the following “magic” formula:

-channels 6 -af pan=2:0.4:0:0:0.4:0.2:0:0:0.2:0.3:0.3:0.1:0.1

Please do note the -channels 6, which is needed for MPlayer to decode all 6 channels so that it can mix them down to two. The first argument to pan is the number of output channels (2); the remaining numbers come in pairs, one pair per input channel, giving how much of that channel goes into the left and right output respectively. One can read more about the pan filter in the MPlayer documentation.

How far have we come?

Things like these make me wonder: with all the advances in computer science, how far have we really come?

  • 40 years after the invention of relational databases we are still manually defining indexes
  • 40 years after the invention of Unix, the scheduler in Android (= Linux) still does a terrible job at scheduling the tasks that really depend on it (games and audio)