Philosophical Musings

August 17, 2007

Do We Really Want Smarter Software?

Filed under: AI,software — Elad Kehat @ 1:32 pm

I recently read these excerpts from Microsoft’s Craig Mundie, quoted on Geeking with Greg:

 

Think if the computer was really much more personalized in terms of what it did for you.
It will become more humanistic – your ability to interact with the machine more the way you would interact with other people will accrue from this big increase in power.
It … [will] adapt more to the environment and your needs and the things that are going on around you … The way in which you will be able to interact with it will be significantly changed.
A computer and its software can move today from a tool to … a great assistant.
[Assistants] think. They learn about you. They understand what you value. They understand what’s important. They make decisions … They speculate about what might be interesting.

 

While in general I’m very enthusiastic about AI, reading this made me rethink – do I really want my computer to become an intelligent assistant?

I’m thinking about what Microsoft has achieved so far in this field (though admittedly it isn’t much), to try to picture where this is going: all those pesky MS Word features that supposedly study me, try to anticipate me, or speculate about what I want. The annoying numbered lists that never work the way I want them to, the automatic indentations that never get it right, the pesky auto-text suggestions and the appallingly bad grammar checker. Thankfully, I can disable them. But would a Word with some better AI sprinkled on top be better at all these tasks? I think not.

The fact is, all those automatic suggestions are usually right on target! Nevertheless, the occasions when the software gets it wrong, be they few and far between, annoy the hell out of me, to the point where I disregard the successes and focus on the failures.

But this bias still isn’t my main problem. The real problem, I think, is that I don’t want the computer to think, learn about me, or, god forbid, make decisions. I want it to be a stupid machine that does what it’s told.

Human assistants, while they can study, anticipate and make decisions, frequently make mistakes in their anticipations and decisions. However, we accept this as a fact of life, because they’re human. After all, to err is human. We also need to treat them as humans: anticipate their needs, care about their feelings and adjust our expectations. But who wants their software to be like that?

 

AI has great potential and endless applications. However, I don’t need a more intelligent productivity suite that anticipates what I want. I need a better UI that makes it easier for me to tell my software what I want.
