I worked on the Semantic Web. It has so many fatal flaws, that I am amazed in hindsight that I didn't see them back then.
Berners-Lee was successful with the Web because it was not an academic idea like Nelson's and Engelbart's hypertext; it was a pragmatic technology (HTTP, HTML, and a browser) that solved a very practical problem. The Semantic Web was a vague vision that started with a simplistic graph-language specification (RDF) that didn't solve anything. All the tools for processing RDF were horrendous in complexity and performance, and anything you could do with it could typically be done more easily by other means.
Then the AI people of old came on board and introduced OWL, a turn for the worse. All the automatic inference and deduction machinery was totally non-scalable even on toy examples, let alone at web scale. Humans in general are terrible at making formal ontologies; even many computer science students never really understood the cardinality restrictions. And how would it bring us closer to Berners-Lee's vision? No idea.
Of course, its basic assumptions about the openness, distributedness, and democratic qualities of the Web also didn't hold up. It didn't help that the community is extremely stubborn and overconfident. Still, they keep convincing themselves it's all a big success, and will point at vaguely similar but successful stories built on completely different technology as proof that they were right. I think this attitude and this type of person in the W3C also led to the downfall of the W3C as the Web authority.
There are different flavors of OWL nowadays. Some of them are dedicated to reasoning over huge volumes of data (polynomial-time algorithms), although they are not very expressive. Some are more expressive but don't scale very well. Some are incredibly expressive but undecidable, so you can only use them as a formal representation of a domain, not something you can reason from.
The practice in the community is to choose a fragment of OWL/description logic that fits your needs: different tools for different uses. In practice I'm especially fond of the simplest languages, just a little more expressive than a database schema or a UML class diagram, as they make it easy to describe things and are still very useful, with lots of efficient algorithms to infer new facts.
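To give a feel for the kind of cheap inference those simple languages support: an RDFS-level reasoner mostly computes the transitive closure of subClassOf and propagates rdf:type along it, which is a simple fixpoint. Here is a minimal sketch in plain Python (the class and instance names are made up for illustration):

```python
# Toy sketch of RDFS-style subClassOf inference: compute the transitive
# closure of the subclass hierarchy, then give every individual the types
# of all superclasses of its asserted class. Names are illustrative only.
from collections import defaultdict

def infer_types(subclass_of, instance_of):
    """subclass_of: (sub, super) pairs; instance_of: (individual, class) pairs."""
    supers = defaultdict(set)
    for sub, sup in subclass_of:
        supers[sub].add(sup)
    # Fixpoint: keep adding superclasses-of-superclasses until nothing changes.
    changed = True
    while changed:
        changed = False
        for sub in list(supers):
            for sup in list(supers[sub]):
                new = supers[sup] - supers[sub]
                if new:
                    supers[sub] |= new
                    changed = True
    # An individual of a class is an individual of all its superclasses.
    types = defaultdict(set)
    for ind, cls in instance_of:
        types[ind].add(cls)
        types[ind] |= supers[cls]
    return types

subclass_of = [("Dog", "Mammal"), ("Mammal", "Animal"), ("Cat", "Mammal")]
instance_of = [("rex", "Dog")]
types = infer_types(subclass_of, instance_of)
print(sorted(types["rex"]))  # ['Animal', 'Dog', 'Mammal']
```

This is the whole trick at the RDFS level: polynomial, predictable, and easy to implement, which is exactly why the least expressive fragments are the ones that scale.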
I could never really understand what it was going to do in specific terms: going from "programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other" to specific cases that seemed useful.
Spolsky had a great blog post about this kind of thing: CS people looking at Napster, overemphasizing the peer-to-peer aspects, and endeavouring to generalise it. Generalising is what science does, so the drive was there.
Generalising a solution can lead you down the path of a solution seeking a problem. The web is also hard: lots of chicken-and-egg problems to solve.
When TBL released the WWW, he had a browser, a server, and web pages that you could use right away. The "standards" existed for a non-abstract reason.
On criticisms of the W3C... I don't know. They have an almost impossible job. The world's biggest companies control the browsers. Standards are very hard to change. There are very hard network-effect problems and people problems, and enormous economic, political, and ideological interests are impacted by their decisions.
You could say that they should not have been involved with the project until it was much more mature, when they could decide whether or not to include it. That said, if they had been those sorts of people instead of academics... I'm not sure that's better.