
This makes me wonder if someone is putting the latest version of the Factbook on Gopher now. It might be a fun little project?
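
For the "fun little project" angle, a bare-bones Gopher server is only a few lines of Python. This is just a sketch: the directory name, hostname, and port are placeholders, and a real version would first need to render the Factbook pages into plain text files.

    import socketserver

    FACTBOOK_DIR = "factbook"  # placeholder: a directory of plain-text country files

    class GopherHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # A Gopher client sends a selector terminated by CRLF (RFC 1436)
            selector = self.rfile.readline().decode("utf-8").strip()
            if not selector:
                # Empty selector: answer with a minimal menu (item type 0 = text file)
                menu = "0United States\t/us.txt\tlocalhost\t7070\r\n.\r\n"
                self.wfile.write(menu.encode("utf-8"))
            else:
                # No path sanitization here -- purely illustrative
                with open(FACTBOOK_DIR + selector, "rb") as f:
                    self.wfile.write(f.read())

    if __name__ == "__main__":
        with socketserver.TCPServer(("", 7070), GopherHandler) as server:
            server.serve_forever()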

PS. Lagrange is a beautiful piece of software.


Yes, beyond the loss of the publication as a historical artifact, there is also the loss of the continuing process that kept it up to date as a representation of the present (with whatever flaws such representations always have).

The suggestion that obvious propaganda is somehow better than "subtle" propaganda is itself propaganda.

Obvious propaganda plays a role in the destruction of a shared objective reality, which is part of the authoritarian playbook. Subtle propaganda distorts reality but preserves the notion of a shared objective one and does not intend to undermine trust.

When a government uses blatant, easily disproven lies, doubles down on them, and follows with increasingly absurd ones, there is no space left for subtlety or for trustworthy sources within that government.


It works remarkably well there too. Thanks CIA for making a website that is (was) easy to archive.

I kind of like the association since it speaks to how text collected while browsing the web can be used to generate new text, which is similar, at least metaphorically, to how human memory is reconstructive and transformative, not perfect recall. https://en.wikipedia.org/wiki/Reconstructive_memory


Also check out https://archiveweb.page, which is open source, runs locally, and lets you export archived data as WARC (ISO 28500). You can embed archives in web pages using their Web Component, https://replayweb.page.
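
If you want to poke at the exported data outside the browser, the WARC files it produces can be read with the warcio library; the filename below is just a placeholder.

    from warcio.archiveiterator import ArchiveIterator

    # "my-crawl.warc.gz" stands in for a WARC exported from ArchiveWeb.page
    with open("my-crawl.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                # Each response record carries the captured URL and its HTTP headers
                print(record.rec_headers.get_header("WARC-Target-URI"),
                      record.http_headers.get_header("Content-Type"))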


Does anyone know what PrivateGPT is using for its local model, and where it came from?

Update:

Answering my own question: it looks like it uses llama.cpp in local mode? https://github.com/imartinez/privateGPT/blob/main/private_gp...
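
Assuming it really is llama.cpp underneath, loading a local model with the llama-cpp-python bindings looks roughly like this; the model path and prompt are placeholders, not whatever privateGPT actually ships with.

    from llama_cpp import Llama

    # Placeholder path to a locally downloaded quantized model file
    llm = Llama(model_path="models/ggml-model-q4_0.bin", n_ctx=2048)

    out = llm("Q: What is the World Factbook? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])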


In her book 'Discriminating Data' (2021), Wendy Chun reveals how polarization is a goal, not an error, within big data and machine learning. These methods, she argues, encode segregation, eugenics, and identity politics through their default assumptions and conditions. Hito Steyerl and Wendy Chun will discuss how people can release themselves from the vice-like grip of discriminatory data, and consider alternative algorithms, defaults, and interdisciplinary coalitions that could desegregate networks.


If you work as a researcher, you might be able to apply for Academic Research Product Track access, which includes the full archive of tweets going back to 2006.
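
With that access, a query against the v2 full-archive search endpoint looks roughly like this; the bearer token and query below are placeholders.

    import requests

    BEARER_TOKEN = "..."  # placeholder: token issued with Academic Research access

    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/all",  # full-archive search endpoint
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": "world factbook",             # placeholder query
            "start_time": "2006-03-21T00:00:00Z",  # around the first day of Twitter
            "max_results": 100,
        },
    )
    resp.raise_for_status()
    for tweet in resp.json().get("data", []):
        print(tweet["id"], tweet["text"])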


Is there a good listing of these, or could we crowdsource one quickly to make sure they are in the Internet Archive?


