Historically Microsoft has flown by the seat of its pants with a lot of this stuff. You wouldn't believe how horrible the Windows 95 or Windows XP interfaces were right up to the last possible minute.
As might be expected with anything "flashy" or "cool" in a company not exactly defined by those adjectives, there was a shortage of skill and an overage of opinions. Only after true "ship it" crush set in did the less qualified people seem to scurry away and worry about little details like, oh, an entire laptop product line that didn't boot.
For example: kernel dev Mark Lucovsky got some perverse thrill by checking in the startup bitmap. In his mind, this granted him some sort of final authority over the branding of a product that sells a half billion copies. We'd let him yell at the testers and people providing feedback that it looked like shit -- then we'd giggle while Dave Cutler punched holes in the walls in the build lab. Then with egg on their faces and casts on their hands, they'd go do something they were good at while the designers and a few hand-picked nerds possessing the barest hint of aesthetic capability tried to save face at the last minute.
The variable now is Sinofsky, who unfortunately seems to think he really knows design. This is the guy who thought that removing items from menus "based on the user's work habits!" was a great simplifying metaphor in Office 2000. You'd look at a menu for months, gradually building familiarity with the product. Then that one day when you finally needed to insert a friggin' table, the damn option wouldn't be there any more.
I'm sure he has all sorts of data proving how great everything is. I can't tell you how much data we had "proving" that the original Xbox controllers were way better than the Nintendo or Sony controllers, too.
Design has become much more important since WinXP, and it's one of the pillars of Win8. Microsoft may not have taken it seriously in the past, but it certainly does now.
The design of Win8 has been very meticulously planned; I don't think we'll see any significant changes at all.
I am curious, though, about how often they plan on updating platform apps such as Mail: will we see significant improvements every 3-6 months, or will they be tied to the OS release schedule?
XP's interface was horrible after the last possible minute, too. Aero is great, though; hilariously, from what I understand it's being thrown out in Windows 8.
My understanding is that it's only Aero Glass that's being canned, that being the smoked-glass transparency effect on everything. Presumably it's a battery-life thing, since even bargain-basement onboard GPUs can handle it performance-wise these days.
Having spent the better part of the last couple of months on a commercial Windows 8 XAML app, the number of WTF moments per day was phenomenal. If the devs writing the OOTB software were anything like us, it's pretty easy to see why they've turned out crap. WinRT takes the best parts of .NET, XAML, etc. from the last 5 years and then ignores that any of it ever existed. The problem is that it tries oh so hard to present itself as just an incremental change. It's not. Small things changed all over the place, with no documentation on why and no progressively better software, because it's a totally new implementation of a bunch of things.
I recommend that people treat Windows 8 Metro as a 1.0 product (maybe do the standard wait for the service pack). It's worrying that it will RTM in two weeks' time.
I haven't developed for Windows 8 and WinRT yet, but having built apps for Windows since the 90s I've seen a lot of technologies come and go, and invariably -- not having done the work you've done, I may well be completely wrong here -- people will say the new thing isn't like the preceding technology stack and ipso facto is crap.
For instance: MFC/straight-up GDI devs moving to VB/Delphi/PowerBuilder; Delphi users moving to .NET; .NET WinForms developers having to relearn everything when WPF came out; etc.
There are many places where it's really busted. For example, in WPF you could pass a ListView a predicate to filter the visible items in it. That is mysteriously gone in WinRT. System.Xml.XPath is mysteriously gone. The list of omissions I've found is full of things like that. It's a very similar development model to WPF; there are just lots of pieces missing.
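To give a concrete idea of the first one, this is roughly the WPF pattern I'm talking about, done through the default collection view (a sketch only; Person, Name and searchText are illustrative names, not anything from our app):

    using System.Windows.Controls;
    using System.Windows.Data;

    public class Person { public string Name { get; set; } }

    public static class ListFiltering
    {
        public static void ApplyNameFilter(ListView listView, string searchText)
        {
            // The collection view sits between ItemsSource and what's displayed;
            // assigning Filter re-evaluates which items are shown.
            ICollectionView view = CollectionViewSource.GetDefaultView(listView.ItemsSource);
            view.Filter = item =>
            {
                var p = item as Person;
                return p != null && p.Name.Contains(searchText);
            };
        }
    }

As far as I can tell, in WinRT you end up re-filtering the source collection yourself instead.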
That being said, I would encourage anyone looking at developing a Windows 8 app to take a strong look at the HTML/JavaScript API for building apps. There are some oddities there too, but in my experience (which is admittedly limited) they are easier to deal with.
XPath functionality's gone? Hmm, pretty massive oversight. Microsoft's not one to rest on their laurels when it concerns their cash cow, though. I expect this'll all make a reappearance soon enough.
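In the meantime, LINQ to XML seems to cover most of what I'd reach for XPath for anyway. Something along these lines (assuming System.Xml.Linq is still in the WinRT profile, which I believe it is; the feed structure below is made up):

    using System.Linq;
    using System.Xml.Linq;

    public static class FeedTitles
    {
        // Roughly the XPath query "//item/title", expressed with LINQ to XML.
        public static string[] GetTitles(string rssXml)
        {
            XDocument doc = XDocument.Parse(rssXml);
            return doc.Descendants("item")
                      .Select(item => (string)item.Element("title"))
                      .Where(title => title != null)
                      .ToArray();
        }
    }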
Also, C# (and .NET) were lacking quite a bit in the early days; now C# is pretty much a kitchen-sink language.
In my opinion, though, that's precisely a part of the problem: every couple years, there's a new One True Way to develop for Windows. I understand that breaking with old development frameworks is sometimes a good and necessary thing to do, but the sheer frequency of non-incremental changes to Windows development methodologies seems problematic to me.
Microsoft keeps (us Windows) developers in business; it may not be great for the businesses who retain our services, but it sure is great for developers. And actually, I'm not entirely convinced that MS reinvents all that much more than, say, the OSS web developer community does.
The fact that it's RTMing in 2 weeks is irrelevant. What you're looking at (the Release Preview) was released May 31st, 2012. That means that before RTM hits, there will have been two months and a week or two's worth of work put into it. While I think the apps won't be that clean in RTM yet, by the time GA rolls around I'm confident that they will be pretty good.
Anyone who has been in the software industry for a while now will have noticed the recent influx of people who I will refer to as "the Creatives".
Most of them are decent people, and I'll go for a beer with them any time. But what they bring to the software they're working on usually isn't beneficial.
Put bluntly, they care more about the look and aesthetics of software than about its usability and practicality. As these people have become more and more involved with software development over the past 5 to 7 years, we've started to see more and more problems arise.
Long-time Phoenix/Firebird/Firefox users will know what I mean. Each round of "improvements" to the UI by these creative types has in fact made it much worse. The usability has suffered greatly. Unfortunately, these people also apparently have great influence at Google and Apple, so alternate browsers like Chrome and Safari have UIs that are just as bad, if not worse in some ways.
My suspicion is that the same thing has happened at Microsoft, and Windows 8 and Metro are just this problem becoming extremely obvious on a wide scale. If Windows 8 and Metro are huge failures, as they very well may be, it will probably become clear that we need to ditch these UIs that focus on being pretty above all else.
The industry as a whole will need to return to simpler UIs that are usable, functional and practical. We'll need to leave this dark period behind us, but hopefully we can learn from these mistakes to ensure future applications do not fall victim to the problems we are witnessing today.
Well, you have to qualify why Chrome's UI sucks; otherwise it's just an opinion. As far as I am concerned, it is plenty useful.
Indeed. Years ago, they made a bold choice to use one input box for both searching and URLs, and now it's one major reason why I use Chrome over Firefox and Safari.
Fun fact: in 1999, Mozilla (around its M11 release) introduced a "Search" button in the location field [0], which meant a unified input box. Firefox and its two-field setup was, absurdly enough, a later move. I can't say when that happened, or whether it was already there in the earliest days. The single box is still there in SeaMonkey, though, IIRC.
Firefox's bar works exactly the same as Chrome's now. And even long before FF started letting you search from the address bar without search keywords, you could use search keywords.
> Firefox's bar works exactly the same as Chrome's now.
Is it exactly the same? In Firefox, AFAIK you don't get every URL you type sent off to Google for logging as part of the instant search system; that only happens with things you type in the separate search box. In Chrome, I'm not sure whether it's the default for a new installation at the moment, but that capability is definitely there for anything you type in the (only) box.
I meant exactly the same as far as an average user is concerned - you never have to use the search box (I remove it), you can just put search terms in the main box and it'll use the configured provider.
I will admit FF is a little more prone to mistaking search terms for a URL on occasion, but that's a minor annoyance at best.
It wasn't the annoyance I was commenting on, but rather the privacy implications of combining the two bars when what you would type in one of them is typically sent off to some search engine somewhere in real time.
Safari in 10.8 handles the address bar in a similar way to Chrome. Indeed, it seems to be faster too; I'm toying with switching from Chrome. We shall see.
See, I for one am glad that this happened and that the classic nerds with no visual taste whatsoever have been driven back to where they belong: the backend.
Using Chrome as an example is really, really funny.
How about GIMP? Every Linux distro before Ubuntu? Lotus Notes?
Yes, one thing has been lost, though: that smug feeling of eliteness, of being able to do something with a computer no normal user could. Like attaching a printer and having it print right away. Or being able to set up a photo album, import the photos from a bunch of devices and upload it all to a safe off-site location.
Now grandma can do it and it sucks, right? She doesn't even know how it works! Unfair!
I like UIs that have a wow effect and a shallow learning curve. But this fades quickly: after you get used to it, you don't take any pleasure in the looks of the UI. What matters in the long term is efficiency and features.
If anything, Apple has shown us that design does matter, and I feel like software (especially websites) is getting more usable over time. Your discontent over changing UIs seems like that of a long-time user upset by someone moving their cheese. I didn't like Chrome's spartan UI at first, but it won me over.
You have the basic Cocoa interface with accessibility features. You have OS X's default helpers, mostly the "Help" menu that lets you search for menu items. You have AppleScriptability for lots of apps, even some third-party apps written by "designer" folks, like Things. Then there is Automator. And below all that, you are free to do whatever you want from the Terminal.
I hear this argument all the time, but I don't get it. Besides software that 3rd parties choose not to write for OSX, what functionality is possible on Windows but impossible on OSX?
There's some truth to what you're saying. Gnome 3, Unity and Metro are simply worse desktop environments than their previous versions. They are aiming for touch-friendliness and for some reason trying to radically change what was working fine. Both of these are massive mistakes.
BTW, Chrome's UI is almost perfect (including the OS and the mobile browser). The people behind it must be some of the best in the industry.
Usability and practicality is down for the likes of us. But for the vast majority of people, who used to despise computers, a whole new world has opened up. We are the minority here.
I'm not certain that's true. What I think has happened is that more people are living in ignorance of what their software could be doing.
Instead of the traditional rule of 80% of people using 20% of an application's functionality, we now have 95% of people using 3%. They aren't even aware that the other 97% exists, even if that functionality would be very useful to them.
Take Chrome, for instance. Much of the functionality is hidden away under a single menu, far on the right, that isn't even obviously a menu.
I've worked directly with a number of "average" people now who moved to Chrome because it was recommended to them by somebody else, but they didn't even know about its main menu. You wouldn't believe their surprise when they see that Chrome allows them to open multiple tabs or windows, or that they can zoom the page, or that they can print.
These aren't stupid people, either. They're accountants, lawyers, doctors, nurses, and other non-technical, but well-educated and intelligent, people. Some of them have been using computers for decades. But Chrome's poor UI design hid useful functionality from them, rather than making it obvious like traditional menus under a permanently-visible menu bar used to do.
Being unaware of Chrome's main menu, the users were also never aware of the keyboard shortcuts. Along with the highly simplified toolbar, this prevented them from even realizing that Chrome could do anything beyond browsing a single web page at a time.
So while software today may be easier to use for typical people, I suspect that's only because it's giving them an experience that's extremely basic. These people will be very happy that Chrome's a fast web browser, but they'll silently be disappointed when they mistakenly think it isn't capable of printing a page, merely because the UI misled them.
> I've worked directly with a number of "average" people now who moved to Chrome because it was recommended to them by somebody else, but they didn't even know about its main menu. You wouldn't believe their surprise when they see that Chrome allows them to open multiple tabs or windows, or that they can zoom the page, or that they can print.
I'll suggest that these are not things "average" users do. They don't care about tabs and windows, they just deal with them when the site they are on opens them. If they need to print something, like an order confirmation on amazon.com, they use the "Print" button that's on the page. Or they might do a screenshot if they know how to do that. And then email it to themselves.
Man, Metro/Windows 8 is like watching a train wreck in very slow motion. Everyone knows the hybrid desktop/tablet UI is a disaster, and Microsoft, stuck in a middle-management positive-reinforcement loop of putting their heads in the sand, keeps chugging along down the rails. Everyone who has used it and reviewed it so far has said it's a disjointed experience to have the start button go to a completely different UI with its own app ecosystem. It sucks more because personally I think the Windows desktop OS is much better than Mac and Linux for a number of reasons (I spend time in both Mac and Linux daily for development). Have you been to a computer store lately? There are tons of desktops with touch-enabled displays, but they have only been gimmicks. There's no real demand for them because the usability cases aren't practical when you already have your hands on a mouse/keyboard. Touch = tablet UI. Mouse/keyboard = desktop UI, and which setup does the majority of Windows users have? Anyways, god dammit Microsoft, get off your high horse and fix Windows 8 before you really start pissing people off with another ME/Vista release.
They're not going for upgrades on existing Windows customers' computers; the main goal is to sell new tablets and touch-enabled laptops and all-in-ones. There was a section of WPC 2012 where they showed off some of the new OEM devices and, guess what, the majority of them were touch-enabled.
I'm still not quite sure what to think about Windows 8 and the new Metro-style apps. My first (and somewhat lasting) reaction was that I didn't like the design, but I wasn't sure if it was because it was actually bad or if it was just new.
I think part of it is that the Metro style apps seem heavily slanted toward consumer/social apps, designed with a tablet in mind first. Utility/business apps don't seem to fit the Metro style as well, and now that I've seen some examples of such apps built with Metro, I feel inclined to hold off on porting my Mac app to it.
I hope I'm wrong, but it feels like in some ways Microsoft is making the opposite (but similar) mistake to the one they made with tablets in the past: this time, they are forcing a tablet paradigm onto the desktop.
I can't remember the last time I had to reboot my MacBook due to an unresponsive program. And I am by no means using the latest Apple hardware or software.
I am not an Apple fanboy (I used Windows only up to a year ago), but the Apple user experience is orders of magnitude better than Windows' latest attempt. I suspect that the main difference stems from each company's design process. Apple has slowly built up all of their products and OSes, incrementally adding features only when they are rock solid. Microsoft, especially lately with Windows 8, has sought entry into a new market (tablets/touch interfaces) and has hoped to skip all the intermediate steps. This method is clearly inferior, and I won't be surprised when Windows 8 falls flat.
Please refrain from saying you're not an Apple fanboy and then, in the next sentence, stating that you're new to the Mac. You can be using Apple products for a week and be a fanboy. You can be a Windows user (planning to buy Macs) and be a fanboy. Being a fanboy (= mindlessly defending the platform, no matter what) has nothing to do with how long you've been using that platform.
Pretty much everything the article complained about was clearly a bug. Yes, it's buggy. Welcome to the world of pre-release software, the world I live in on a daily basis. Things tend to suck. But that's expected. The Release Preview was released quite a while ago, and as you can expect, the applications weren't a main priority when they were ramping up to release these previews. Hence one of the reasons it says "App Preview" when you open any of these apps.
The thing is, these applications can be updated ad hoc through the Store's application update process (as many people may have seen with some of the first-party apps). If I were you, I wouldn't expect them to be really polished and up to quality until shortly before general availability.
But yeah, I will agree with you that they're buggy, although complaining about bugs when you're looking at a beta seems like a really stupid thing to do, to me.
I can't emphasize enough how important it is for the OS vendor to "set a good example" when designing their own apps for the OS.
Like it or not, devs will look to the built-in apps for guidance on what to make theirs look like -- this is also true for how buggy or unresponsive an app can get before it is deemed "unacceptable".
Put another way, think of first-party apps as a form of enforced competition. If you don't do better than the level we, the OS vendor, are competing at, then users will be disappointed in you.
Since the Mail app is the protagonist in this piece, I will say that Apple's own Mail application was for a long time (and may still be for all I know) similarly dreadful.
In general, they tend to be slow, contain artefacts, crash, hang, lose responsiveness, or just flat-out refuse to do any networking operations.
This exactly describes (to the best of my recollection, I gave it one final chance in Tiger I think) Mail.app on Mac OS X.
Unfortunately we're also talking about a company who many are used to seeing as implementing incremental change, with a backwards compatible slant. Windows 8 is pretty far from this ideal for the dev experience.