As you say, Freebase is bigger than Wikidata: it is 22GB compressed (250GB uncompressed), while Wikidata is 5GB compressed (49GB uncompressed) [1].
That said, I believe the process described in the blog post does not load the whole Wikidata dump into memory, so it would work just as well for processing Freebase or even larger data dumps on your laptop.
From the post:
How Akka Streams can be used to process the Wikidata dump in parallel and using constant memory with just your laptop.
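To give a rough idea of what that constant-memory approach looks like, here is a minimal Akka Streams sketch (Scala, assuming Akka 2.6+ where the ActorSystem provides the materializer). It assumes the dump has been decompressed to roughly one JSON entity per line; the file path and the parseEntity helper are hypothetical placeholders, not code from the post:

    import java.nio.file.Paths
    import scala.concurrent.Future
    import akka.actor.ActorSystem
    import akka.stream.scaladsl.{FileIO, Framing, Sink}
    import akka.util.ByteString

    object WikidataDumpStats extends App {
      implicit val system: ActorSystem = ActorSystem("wikidata")
      import system.dispatcher

      // Hypothetical per-line parser; a real one would decode the entity JSON.
      def parseEntity(line: String): Int = line.length

      val done: Future[Long] =
        FileIO.fromPath(Paths.get("wikidata-dump.json")) // assumed local path
          // Split the byte stream into lines without buffering the whole file.
          .via(Framing.delimiter(ByteString("\n"),
            maximumFrameLength = 1024 * 1024, allowTruncation = true))
          .map(_.utf8String)
          // Parse several entities in parallel; backpressure keeps only a
          // bounded number of lines in flight at any time.
          .mapAsyncUnordered(parallelism = 4)(line => Future(parseEntity(line)))
          .runWith(Sink.fold(0L)((count, _) => count + 1))

      done.foreach { count =>
        println(s"Entities processed: $count")
        system.terminate()
      }
    }

Backpressure is what makes this work: the file source only reads more bytes when downstream stages are ready, so peak memory is bounded by the stream's buffer sizes rather than by the size of the dump.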
Biased reply (I'm a data scientist there): Common Crawl [1]. We build and maintain an open repository of web crawl data that can be accessed and analyzed by anyone, completely free of charge.
Wikidata is several times smaller than Freebase (shut down by Google in May), and Freebase won't fit in your laptop's RAM.