You bring up some very interesting points here, Jeff. However, in Sweden the stores that have self-checkout have made it a de facto standard to have someone standing by the machines at all times.
At IKEA especially, every self-service machine has one employee assigned to it. Imagine if every developer got their own personal helper for explanations and other valuable information. The outcome would be totally different.
There are no long queues, because as soon as there is a problem, the employee is right there to help you. Maybe you hit the problem once, twice, or three times, but after the third time you know what you were doing wrong. Then the self-checkout system works perfectly.
The special cases you pointed out, where the self-checkout system fails, are taken care of simply by teaching users how to interact with the product.
I did a discrete event simulation on self-service checkout back in 2004. For all cart sizes and item counts, waiting in a line of three people or fewer will be quicker than checking yourself out at the grocery store. Why? Trust.
The store lets its employees scan items very quickly because it trusts them to scan every item and move it toward the bag. It does not let customers scan quickly, for the opposite reason. You must first put all your groceries on the scale to the left of the machine, and then, after scanning, put them on the scale to the right. This process adds time.
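A back-of-the-envelope version of that comparison might look like the sketch below. To be clear, this is my own illustration, not the 2004 model, and every timing parameter (scan rates, per-customer overhead, the weigh-in penalty) is a made-up assumption:

```python
import random

def cashier_time(items, queue_len, scan_rate=1.5, per_customer_overhead=30):
    """Seconds to clear a staffed lane: wait out the queue, then a fast scan.
    Each customer ahead of you has a random basket of 5-30 items."""
    wait = sum(per_customer_overhead + scan_rate * random.randint(5, 30)
               for _ in range(queue_len))
    return wait + per_customer_overhead + scan_rate * items

def self_checkout_time(items, scan_rate=4.0, weigh_penalty=2.5):
    """Self-checkout: slower scanning plus the weigh-before/weigh-after dance."""
    return items * (scan_rate + weigh_penalty)

random.seed(42)
trials = 10_000
items = 20
cashier = sum(cashier_time(items, queue_len=3) for _ in range(trials)) / trials
selfco = sum(self_checkout_time(items) for _ in range(trials)) / trials
print(f"staffed lane, 3 ahead: {cashier:.0f}s; self-checkout: {selfco:.0f}s")
```

With these invented numbers the self-checkout penalty is entirely in the per-item handling; which lane wins depends on how heavily you weight the queue overhead, which is exactly the kind of thing a real simulation would calibrate from observation.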
So what makes the difference to us as customers? Waiting with nothing to do. We would rather take longer to check out as active participants in the process than stand idle in line reading crappy magazines, however juicy the celebrity gossip.
The problem with self-checkout on a busy day is that people are SLOW! Even the slower store checkers are still 10 times faster than people trying to figure out the self-check.
I agree with Rob - has anyone tried to use the self-serve checkout at Home Depot? In my experience, most human-to-machine interfaces are poorly designed. It takes time to program the human in the way the machine's interface is exposed. That's why store cashiers will always be faster operating the machine than we will ever be.
I am not interested in trying to learn someone else's poorly designed human-to-machine interface, which includes the self-serve checkouts. I came to the store to buy shit, not to learn how to use the abomination of a self-serve checkout system. I get enough of that already as a software programmer, and at least there I get paid to do it.
On the complexity thing, there's still this funny balance between man and machine. How does the saying go? … To err is human, but to really mess it up takes software.
I think the great divide is still pattern matching. People are still the best pattern matchers, though there’s been some interesting ground there.
I think the cutting question is always about intrinsic value vs. market value. Just because we can, when should we? That’s where intrinsic value comes into play. Just because we can, why will we … or won’t we? That’s where market value comes into play.
I’d say what inhibits my desire to contribute to OO.o more than anything else is the advent of Google docs. I haven’t used OO.o once since Google docs went live.
-Greg
Ok, can someone please explain to me the logic behind the scales in the bagging area? I mean, what’s this supposed to achieve? If I was going to steal something, I wouldn’t bother scanning it, would I? And the area is only small, so if you’ve got more than a couple of bags worth, it’s a pain to juggle things around, when you could just be loading up a bag in your hand.
I find it hard to believe that a programmer thought this up.
Why would anybody waste their time downloading, using, or coding for OO.o when you can use Google Documents and Spreadsheets for free? For most regular users, it has all you need. Also, Google has copied the look and feel of Office 2003, which I think most users prefer to the hideous 2007 Ribbon interface.
I’m not sure I get it, but yes, I also think stagnation is just natural evolution for Open Source.
I had to start thinking that way after dealing with open source for about 10 years.
In the long run, it always failed to meet my requirements.
The first time I ran The GIMP, it worked absolutely great, and each version added more and more useful stuff. Then suddenly it went dead, and now I have a newer version that does this and that BUT feels awkward for my just-above-MSPaint needs. It doesn't feel as powerful to me nowadays.
The last one was Inkscape. Great vector stuff for my very basic needs. The new version added a plethora of bugs, and the few new features are of little use; some seem not to be standard SVG and cannot be properly used in real, collaborative work.
OO.o is just the same. There were some improvements not so long ago, but nowhere near the number of things that should have been improved. Since the last Office version really pissed me off, I'm somewhat happier with it, but come on…
And I could say the same for X and the Linux kernel (pretending your video card is just a dumb framebuffer has a certain retro feel).
Other things open source has failed to produce: a decent AutoCAD clone, a proper 3D engine as integrated as the commercial ones…
Proper system documentation…
Quoting:
Kill the ossified, paralysed and gerrymandered political system in OpenOffice.org.
This is troublesome for opensource in general. I generally agree with Xepol.
I've spoken with a kernel contributor about the state of the graphics infrastructure; he gave me about two minutes. He then spoke with another guy for almost half an hour about the good ol' days, when the DEC Alpha was DA BOMB. It's been at least 10 years since I've seen an Alpha-based system on the consumer market (it is my understanding they've been out of production for a few years now).
Nowadays Win32 has moved toward managing GPUs as processors. Apple is possibly ahead. Open source is not… but when some maintainers figured out IPC was too slow to be used in some crypto apps, they didn't fix it… they slapped the code into the kernel ASAP. Surely they didn't want to work with the ugly mess that is packaging (the generic distribution is basically still stuck with libc and some kind of shell).
So we see this… strange thing in which some contributions are scrutinized in every subtle detail and implication, while others seem to just go through a fast lane. Maybe this doesn't actually happen, but it feels like it's happening, and that obviously kills contributions.
Come to think of it, if you scan something and then don't put it in the bagging area, you're not stealing it. Someone who wanted to bag a stolen item would put it in the bagging area without scanning it, and that is easily distinguished from scanning something without bagging it. Halting the process when the machine detects an increase in weight without a scan makes sense. Halting it when the machine detects a scan without an increase in weight makes no sense at all.
Also, why are one or more of the self-checkouts routinely switched off? Do they break down that much? Because unless it’s broken, there’s no reason in the world to have one turned off, any more than you’d regularly leave your Web server switched off for any reason other than a breakdown or upgrade.
I like the article. I agree that open source projects need to make it easy for developers. However, there are C++ developers in this world, even some who know Java and C# as well. It's not so hard to pick up languages once you've been through a few.
Jeff, you really need to learn C/C++. Not so you can program in it, but so you can learn the supporting lessons it teaches you. I am always interested in learning new things, as I'm sure you are too. Maybe you should help out OpenOffice as a learning experience.
My question is, why the hell do you want to use a text writer to do text layout? Or text formatting? At work, I use InDesign, InCopy, and Notepad for text work. I type it up in Notepad, copy/paste to InCopy for formatting, and then check in to InDesign for layout work. Much more intuitive, especially for graphics-heavy documents - IMNSHO, office applications of any stripe are horrible with graphic elements.
The REAL issue with F/OSS, which a few comments have hinted at in passing, is what developers seem to forget behind the F/OSS flag of "Get the Source and Fix It Yourself, Faster than Proprietary": you're pushing maintenance and bug fixes onto your customers. While this might work for enterprises that can afford to hire developers to write and maintain the software, the premise falls flat on its face once you reach the consumer market.
That’s why “wrapper” corporations, like Red Hat, are successful. Selling FOSS is not an oxymoron. That’s also why Vim and Emacs are so successful: do you know any NON-programmer that uses them, or any (*NIX) programmer that HASN’T?
On the other hand, programmers are more likely to contribute to what they use. So, if I don’t use a project (like GNOME, for example), I wouldn’t contribute to it. OpenOffice, as so many have noticed, suffers from this.