Richard Gabriel coined a term for those things: compressed definitions.
When something looks like reuse or abstraction but works by implicitly pulling in, and coupling to, something else, it's really exploiting shared context to make the new definition much briefer than it otherwise would have been. The definition still relies on full knowledge of that context, though, so in terms of conceptual load there's no difference.
Just as with other types of compression, the shared context becomes the global coupling across definitions.
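To make that concrete, here's a minimal sketch (the names are hypothetical, invented purely for illustration): the subclass is only a couple of lines long, but reading it requires full knowledge of its parent.

```python
class CsvReport:
    def __init__(self, path):
        self.path = path
        self.delimiter = ","

    def rows(self):
        with open(self.path) as f:
            return [line.rstrip("\n").split(self.delimiter) for line in f]


class TsvReport(CsvReport):
    # Two lines to write, but its full meaning is everything in CsvReport
    # plus this delta -- the shared context the compression exploits.
    def __init__(self, path):
        super().__init__(path)
        self.delimiter = "\t"
```

The brevity is real, but it's brevity of text, not of meaning: the conceptual load of `TsvReport` is all of `CsvReport` plus the delta.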
I once came across a codebase where an "Import data from HTTP" feature extended the "Import data from file" feature by grafting the HTTP calls onto and around the file operations (the sketch after this list shows the shape). This probably made the HTTP class quicker to write, but
- Anyone wanting to make changes to the HTTP import functionality needed to also fully understand the file import functionality since basically all of it was used to import from HTTP, and
- Many changes to the file import functionality also broke usages of the HTTP import functionality. (Fortunately these breakages were always revealed by automated tests before they made it into master.)
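Schematically, the arrangement looked something like this. This is a simplified, hypothetical reconstruction rather than the actual code, but it shows the shape of the coupling:

```python
import tempfile
import urllib.request


class FileImporter:
    def import_data(self, path):
        records = self._parse(path)
        self._validate(records)
        self._store(records)

    def _parse(self, path):
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def _validate(self, records):
        ...  # domain checks elided

    def _store(self, records):
        ...  # persistence elided


class HttpImporter(FileImporter):
    # Quick to write: download to a temp file, then reuse *all* of the
    # file-import machinery. But now every change to FileImporter's
    # internals is also a change to HTTP import, intended or not.
    def import_data(self, url):
        with tempfile.NamedTemporaryFile(
            mode="w", suffix=".dat", delete=False
        ) as tmp:
            tmp.write(urllib.request.urlopen(url).read().decode())
        super().import_data(tmp.name)
```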
Even that's a hard statement to make without more nuance. Often the whole point of an abstraction is to couple things together in a way that happens to be useful.
Take relational databases.
The relational model is a fantastic abstraction that revolutionized data storage and allowed databases to become vastly more powerful than they were prior to the introduction of the model. But it also couples the data representation to a model that's based on sets of tuples, which is a bit of a mixed blessing. It's necessary for the algebra that makes relational databases so flexible, but it also means that the RDBMS's native data format is fundamentally incompatible with how most application programming languages like to organize things, thus creating the need for a translation layer (read: API) to bridge the gap. That is, strictly speaking, a lot of extra work compared to using something like Gemstone, but I tend to think that it's a fair trade in large systems.
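To show what that translation layer ends up doing, here's a minimal sketch (the schema and names are made up): the database answers with flat tuples, and the application reassembles them into the object graph it actually wants.

```python
import sqlite3
from dataclasses import dataclass


@dataclass
class OrderLine:
    product: str
    quantity: int


@dataclass
class Order:
    order_id: int
    lines: list


def load_order(conn: sqlite3.Connection, order_id: int) -> Order:
    rows = conn.execute(
        "SELECT product, quantity FROM order_lines WHERE order_id = ?",
        (order_id,),
    ).fetchall()
    # The relational answer is a set of tuples; the application wants a
    # nested structure. This reshaping is the extra work the mismatch costs.
    return Order(order_id, [OrderLine(p, q) for p, q in rows])
```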
On the other hand, object/relational mapping often drives me nuts, because it introduces the wrong kinds of abstractions. It encourages tightly coupling the database's schema to the data model of whatever part of the application the person who created the table happened to be working on at the time. This reduces the flexibility of the database and can make it harder to predict or control the scope of impact of a schema change. Other approaches, like sprocs, certainly have their problems, but at least they try to put the abstraction somewhere sensible, where it doesn't create the kinds of couplings that make an application resistant to change.
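As a made-up illustration of where the boundary lands in each case: the ORM-style entity below mirrors its table column-for-column, pinning every caller to the physical layout, while the view keeps the boundary inside the database.

```python
from dataclasses import dataclass


# ORM-style: the class mirrors the `customers` table exactly, so every
# module that imports Customer is coupled to the physical schema.
@dataclass
class Customer:
    id: int
    billing_addr: str  # renaming this column means touching every caller


# Sproc/view-style: the boundary lives inside the database; the table
# layout can change as long as the interface below stays stable.
CUSTOMER_SUMMARY_DDL = """
CREATE VIEW customer_summary AS
    SELECT id, billing_addr AS address
    FROM customers;
"""
```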