Hacker News

In a sense, speech is mind-reading: you can have in your mind what the writer had in theirs.

This isn't just sophistry; it shows there are two problems: 1. to transmit information into and out of a mind; 2. to transform the information into a form that can be understood by another. A common language, if you will.

This has analogues in relational databases, where the internal physical storage representation is transformed into a logical representation of relations, from which yet other relations may be derived; and in integrating heterogeneous web services, where the agreed XML or JSON format is the common language and the classes inside the programs at each end are the representation within each mind.
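The web-services half of the analogy can be sketched in a few lines. This is a hypothetical example (the class and field names are invented for illustration): two programs keep different internal representations, but agree on a shared JSON wire format, so neither needs to know the other's internals.

```python
import json

class ServiceAUser:
    """Internal representation in service A: split name fields."""
    def __init__(self, first, last):
        self.first = first
        self.last = last

    def to_wire(self):
        # Transform the internal form into the agreed "common language".
        return json.dumps({"name": f"{self.first} {self.last}"})

class ServiceBPerson:
    """Internal representation in service B: a single full-name string."""
    def __init__(self, full_name):
        self.full_name = full_name

    @classmethod
    def from_wire(cls, payload):
        # Transform the common language back into B's internal form.
        return cls(json.loads(payload)["name"])

msg = ServiceAUser("Ada", "Lovelace").to_wire()
person = ServiceBPerson.from_wire(msg)
print(person.full_name)  # Ada Lovelace
```

Each side only ever translates between its own representation and the wire format, which is the point of the comment above: the common language decouples the two "minds".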

There's no reason to think that the internal representation within each of our minds is terribly similar. It will have some common characteristics, but will likely differ as much as different human languages do - or as much as other parts of ourselves, such as our fingerprints. Otherwise, everyone would communicate with that, instead of inventing common languages.



>Otherwise, everyone would communicate with that, instead of inventing common languages.

How would we communicate with it? By directly linking our brains together? I don't see why it would have a direct translation into sounds.


You're right, that particular sentence is unnecessary to my argument and weakens it.



