About the cost of calling an async function synchronously:
You had a call to read(). Now you have a thread, an event loop, a polling mechanism, and synchronization with that thread. That's many more system calls and CPU cycles for a synchronous async read().
Can you explain how you run an async function synchronously, and how it has no overhead compared to calling a synchronous implementation of the same function?
As far as the kernel-side implementation goes, IO is always asynchronous. The CPU is not involved in the actual movement of data between memory and the network interface.
When you make a synchronous syscall, the kernel initiates the operation, saves the state of your thread, and starts another one. When the network interface is done, it signals the kernel, which then marks your thread as runnable and schedules it for execution.
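For a concrete baseline (a minimal Python sketch; the pipe is just a stand-in for any file descriptor), a plain blocking read is a single syscall from the caller's point of view:

    import os

    r, w = os.pipe()                 # stand-in for a socket or file
    os.write(w, b"hello")
    # One syscall from user space: the thread sleeps inside read() until
    # data is available, exactly as described above.
    print(os.read(r, 5))             # b'hello'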
When you make an asynchronous syscall, the kernel initiates the operation but does not block your thread. This is usually done in the context of an event loop, which makes a synchronous syscall (like epoll_wait) when it runs out of tasks to run.
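A minimal sketch of that shape in Python, using the standard selectors module (which sits on top of epoll on Linux); the pipe and the callback here are illustrative, not any particular framework's API:

    import os
    import selectors

    sel = selectors.DefaultSelector()    # epoll on Linux, kqueue on BSD/macOS

    def on_readable(fd):
        # Called only once the loop knows the fd is ready, so this read
        # returns immediately instead of blocking.
        data = os.read(fd, 4096)
        print("got:", data)
        sel.unregister(fd)

    def event_loop():
        while sel.get_map():             # while any fd is still registered
            # The only blocking point: sleep in the kernel (epoll_wait on
            # Linux) until at least one registered fd becomes ready.
            for key, _events in sel.select():
                key.data(key.fileobj)    # run the callback attached to the fd

    # Demo: register the read end of a pipe, make it readable, run the loop.
    r, w = os.pipe()
    os.set_blocking(r, False)
    sel.register(r, selectors.EVENT_READ, data=on_readable)
    os.write(w, b"hello from the writer")
    event_loop()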
Thus, converting a single async syscall into a sync one means two syscalls: one to initiate the operation, and one to wait for its result. The extra round trip between user and kernel mode is essentially free here, because you are blocking on IO anyway, and whatever logic the async machinery runs on top of those syscalls would have to happen in the synchronous case too.
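A hedged sketch of that two-syscall pattern in Python, approximated with readiness-based IO (try a non-blocking read; if nothing is there yet, block in select() until the fd is readable). The helper name and the pipe are illustrative only:

    import os
    import select

    def read_sync(fd, n):
        """Synchronous read() built on a non-blocking fd."""
        while True:
            try:
                return os.read(fd, n)        # attempt the operation
            except BlockingIOError:
                # Not ready yet: block in the kernel until fd is readable,
                # then retry. This is the extra user/kernel round trip.
                select.select([fd], [], [])

    # Usage: same observable behaviour as a blocking read(), at the cost of
    # at most one extra syscall per wakeup.
    r, w = os.pipe()
    os.set_blocking(r, False)
    os.write(w, b"hello")
    print(read_sync(r, 5))                   # b'hello'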