
Does anyone know any technical details behind this paragraph? Specifically, are they talking about a new kind of interconnect technology with low power over ~1m distance?

(Searching for "rackspace virtual I/O" was not so useful.)

"Rackspace is leading an effort to build a “virtual I/O” protocol, which would allow companies to physically separate various parts of today’s servers. You could have your CPUs in one enclosure, for instance, your memory in another, and your network cards in a third. This would let you, say, upgrade your CPUs without touching other parts of the traditional system. “DRAM doesn’t [change] as fast as CPUs,” Frankovsky says. “Wouldn’t it be cool if you could actually disaggregate the CPUs from the DRAM complex?”"



I don't think this would be good at all for most real workloads - you'd be taking the performance hit of high-latency memory on every access. Even the most hardcore NUMA vendors try to keep DRAM CPU-local, and writing high-performance software for NUMA generally means keeping your data close to your CPU. Otherwise a cache miss, or getting preempted by another task that flushes your cache lines, becomes really, really expensive.
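To put rough numbers on that hit: here's a back-of-envelope sketch in Python. The latencies are assumptions for illustration (roughly 100 ns for CPU-local DRAM, 1000 ns for memory reached over a ~1 m rack interconnect), not measurements of any real system.

```python
# Back-of-envelope: average memory latency when some fraction of
# accesses go to disaggregated (remote) DRAM instead of local DRAM.
# Both latency figures below are assumed round numbers, not benchmarks.

LOCAL_NS = 100.0    # assumed CPU-local DRAM access latency
REMOTE_NS = 1000.0  # assumed latency over a ~1 m rack-scale interconnect

def effective_latency_ns(remote_fraction: float) -> float:
    """Weighted average latency for a given fraction of remote accesses."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

for frac in (0.0, 0.1, 0.5, 1.0):
    print(f"{frac:.0%} remote -> {effective_latency_ns(frac):.0f} ns average")
```

Even with only 10% of accesses going remote under these assumed numbers, average latency nearly doubles, which is the core of the objection above.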

I do think this could be useful for a memcached-style workload, though, in tandem with a smaller amount of fast CPU-local memory - you could basically share "memory bricks" between CPUs, and swap CPUs out independently without evicting an entire system's worth of memcache.
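That idea can be sketched as a two-tier lookup: a small CPU-local cache in front of a shared "memory brick". This is purely illustrative - the brick is just a dict standing in for remote memory, and all the names (`TieredStore`, `local_capacity`) are hypothetical, not part of any real protocol.

```python
from collections import OrderedDict

class TieredStore:
    """Sketch: small fast CPU-local tier in front of a shared memory brick.

    The brick outlives any one CPU, so swapping a CPU out (dropping its
    local tier) doesn't evict the shared cache contents.
    """

    def __init__(self, brick: dict, local_capacity: int = 2):
        self.brick = brick              # shared tier, survives CPU swap-out
        self.local = OrderedDict()      # small CPU-local tier, LRU-ordered
        self.capacity = local_capacity

    def get(self, key):
        if key in self.local:
            self.local.move_to_end(key)     # LRU touch on a local hit
            return self.local[key]
        value = self.brick[key]             # the "remote" brick access
        self.local[key] = value             # promote into the local tier
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)  # evict least recently used
        return value

# A replacement CPU attaching to the same brick sees the cached data
# without any warm-up of the shared tier:
brick = {"a": 1, "b": 2, "c": 3}
old_cpu = TieredStore(brick)
old_cpu.get("a")
new_cpu = TieredStore(brick)    # "swap the CPU out"
print(new_cpu.get("a"))         # shared data still there
```

The design point is just that the expensive-to-rebuild state lives in the shared tier, so the CPU-local tier can be thrown away freely.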




