Hacker News | unleaded's comments

impressive... let's see the page source

>unless the purpose is specifically to have a retro effect where you eschew modern fonts for aesthetic purposes

There are better fonts for this too e.g. Fusion Pixel Font for CJK: https://github.com/TakWolf/fusion-pixel-font

(yes the readme is in chinese, use google translate or something)

i think i saw a good pixel font that supported arabic too once, but of course i can't find it now...


remember when JLCPCB became popular a few years ago and completely flipped hobby electronics upside down? I don't know how possible it is, but it would be really cool if that happened in a few years with semiconductors. it's kind of mad that they've dominated our lives since the 1970s but you can only make them if you're a large company with millions of dollars (or several years, a big garage, and lots of equipment, as seen here). or Tiny Tapeout.


It's not technologically feasible unless plastic aka flexible ICs take off.


Why?

It seems to me that if there were as much of a customer base for custom ICs as there is for PCBs, a fabricator like TSMC could easily offer a batch prototyping service on a 28 nm node, where you buy just a small slice of a wafer, provided you keep to some restrictive design and packaging rules.


They already do offer that - it’s called a multi-project wafer or MPW. But it’s prohibitively expensive on a per-chip basis. It’s mostly used for prototyping or concept proving and not for commercial use.

One problem is, you need to create a photolithography mask set for any volume size of fabrication and those aren’t cheap. But that’s far from the _only_ problem with small volume.
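A back-of-the-envelope sketch makes the mask-set problem concrete. All figures below are illustrative assumptions (not real foundry pricing), but they show why the one-time mask cost dominates at hobby volumes:

```python
# Back-of-the-envelope: why small-volume ASICs are expensive.
# All figures are illustrative assumptions, not real foundry pricing.

def per_chip_cost(mask_set_usd, wafer_usd, dies_per_wafer, wafers):
    """Amortize the one-time mask set over a production run."""
    chips = dies_per_wafer * wafers
    return (mask_set_usd + wafer_usd * wafers) / chips

# Assumed: ~$1M mask set on a mature 28 nm node, $3k/wafer, 2000 good dies/wafer
hobby = per_chip_cost(1_000_000, 3_000, 2_000, wafers=1)       # one wafer
volume = per_chip_cost(1_000_000, 3_000, 2_000, wafers=5_000)  # high volume

print(f"1 wafer:     ${hobby:,.2f} per chip")    # mask cost dominates
print(f"5000 wafers: ${volume:,.2f} per chip")   # mask cost amortized away
```

Under these (made-up) numbers, one wafer works out to ~$500 per chip while five thousand wafers bring it near $1.60, which is why MPW runs that split a wafer across many projects exist at all.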



They should say on this page that this project has ended. There are some spinoffs people interested in this can look into:

https://tinytapeout.com/

https://wafer.space/

https://chipfoundry.io/


This is an absolutely vital development for our computing freedom. Billion dollar industrial fabs are single points of failure, they can be regulated, subverted, enshittified by market forces. We need the ability to make our own hardware at home, just like we can make our own freedom respecting software at home.


Still relevant today. Many problems people throw onto LLMs can be done more efficiently with text completion than begging a model 20x the size (and probably more than 20x the cost) to produce the right structured output. https://www.reddit.com/r/LocalLLaMA/comments/1859qry/is_anyo...


I used to work very heavily with local models and swore by text completion despite many people thinking it was insane that I would choose not to use a chat interface.

LLMs are designed for text completion and the chat interface is basically a fine-tuning hack to make prompting a natural form of text completion to have a more "intuitive" interface for the average user (I don't even want to think about how many AI "enthusiasts" don't really understand this).

But with open/local models in particular: each instruct/chat interface is slightly different. There are tools that help mitigate this, but the more you're working closely to the model the more likely you are to make a stupid mistake because you didn't understand some detail about how the instruct interface was fine tuned.

Once you accept that LLMs are "auto-complete on steroids" you can get much better results by programming the way they were naturally designed to work. It also helps a lot with prompt engineering because you can more easily understand what the model's natural tendency is and work with that to generally get better results.
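The point about chat being a fine-tuning layer over completion is easy to see if you render a chat template by hand. The sketch below uses a ChatML-style template (the exact special tokens vary per model; this is an illustration, not any specific model's spec): the "chat" is just a string the model completes, same as any raw prompt.

```python
# A chat "interface" is just a text-completion prompt wrapped in special
# tokens the model was fine-tuned on. ChatML-style shown here; the exact
# tokens differ between models, so always check the model's own template.

def chatml_prompt(messages):
    """Render chat messages into the raw string the model actually completes."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    out.append("<|im_start|>assistant\n")  # the model completes from here
    return "\n".join(out)

# Raw completion: you control the entire prefix, no template involved.
raw = "The capital of France is"

# "Chat" is the same thing with a learned wrapper around it.
chat = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(chat)
```

Getting one token of that wrapper wrong (a missing newline, the wrong end token) is exactly the kind of silent mistake described above, which is why working against the raw completion interface can be less error-prone.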

It's funny because a good chunk of my comments on HN these days are combating AI hype, but man, LLMs really are fascinating to work with if you approach them with a bit more of a clear-headed perspective.


Maybe? The loop process of try-fail-try-again-succeed is pretty powerful. Not sure how you get that purely with text completion.


Why would you do that when you could spend months building metadata and failing to tune prompts for a >100B parameter LLM? /s


it never has. see McCarthyism for instance


>Never

>Mentions one discrete event

Come on...



Someone has sort of done this:

https://www.reddit.com/r/LocalLLaMA/comments/1mvnmjo/my_llm_...

I doubt a better one would cost $200,000,000.


Every techie knows about Linux by now. Not everyone chooses to use Windows because they're foolish or don't know any better


why do they choose it?

i have a windows workstation because one CNC machine that we use needs it. only other reason i can see is gaming?

I have all 3 major OSs at home and, honestly, Windows 11 is the stuff of nightmares to me


I've given some good reasons before: https://news.ycombinator.com/item?id=45858749

The "solutions" provided to me so far for my primary issue (using Ableton Suite DAW) have not worked. There is no practical solution that allows this software to function in a Linux environment successfully. I can open the app, but that's the extent of it. It's not usable.

> I so badly want to jump ship entirely, but there's several things holding me back. I do music production as a hobby and Ableton Live doesn't play nice with Linux. In fact it seems anything that is resource intensive without native linux support has some issues. I'm also an MS stack developer, so things like Visual Studio Pro aren't available (although I've been using Cursor IDE more and more these days). Lastly I have some games acquired through "the high seas" in which a work-around doesn't exist for compatibility.

> The responses I got were to switch to different software. No, no, and no. I paid a lot of money for Ableton Suite and poured many many hours into learning how to use it; it's the DAW I prefer to use, I don't want to switch.

> Having said this, I did try to dual boot recently with Linux Mint, and once again ran into headaches getting my Logitech mouse buttons to work.


Adobe products, for example. Or any of a myriad of other products which support only Windows/macOS and have no Linux version.

And, no, Wine cannot run anything.

You see, I don't need an OS at all, I need applications. Some of these applications are "universal" (Firefox, for example), some have good equivalents, and some are unique to an OS.

And, no, Darktable or RawTherapee are not equivalent to Lightroom or Capture One. And no, there is no equivalent to foobar2000 among music players.


>And, no, Wine cannot run anything.

Wine may not be able to run the apps you need, but it can run plenty. The older software gets, the more often Wine becomes the only option for running it.


MPD + advanced clients pwn foobar2000 anytime. Also Audacious, Strawberry...

Audacious with audacious-plugins can play almost anything (even video game music files) and it still has ProjectM plugin support.


Nope, the UI for mpd shows that there are 11093 albums in my collection, but the first several screens of Albums are all sequences of `?`. Very useful. The number itself doesn't look right either; my estimate is about half of that, maybe less.

On the other hand, the same client shows only 6391 files, which is way too small a number if 1 file = 1 track. OK, there are a lot of image + CUE albums; I wonder, does that count as 2 files or one?

So it is useless, unfortunately. foobar2000 lets me add a folder / file set to a playlist and start listening. With an "Artist/Year - Album" scheme on the file system it is easy and convenient. Tags may be broken, but all my music is there and I always know where to look for what I want to listen to now.


When I tried MPD last time (about 2 years ago, to be honest) it failed to play the wv.iso format, and I have this abomination in my collection.

Also, it is not very good with broken tags, MP3 tags in local codepages (different for different albums!), etc.

You cannot imagine what can be seen in the wild in a music collection started in 1995!

Heck, I'm downloading mpd for Windows right now and I'll try to add my collection to it. But I'm not holding my breath; all previous attempts to import my collection into any software failed for 15-20% of the collection (different parts for different software).
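For what it's worth, the "tags in local codepages" breakage is often recoverable, because the original bytes are usually intact and only being decoded with the wrong codepage. A minimal sketch of the repair (cp1251 is an assumption here; substitute whatever codepage your collection actually used):

```python
# Typical 90s-era broken tag: text was written in a local codepage
# (cp1251 in this example) but a modern player decodes it as Latin-1,
# producing mojibake. The bytes survive, so a round-trip repairs it.

def repair_tag(mojibake, codepage="cp1251"):
    """Undo a Latin-1 misread by recovering the original bytes,
    then decoding them with the codepage they were written in."""
    return mojibake.encode("latin-1").decode(codepage)

original = "Кино"                    # a Russian band name
stored = original.encode("cp1251")   # bytes as written by an old tagger
misread = stored.decode("latin-1")   # what a naive player displays
print(repair_tag(misread))           # recovers the original name
```

Since different albums may use different codepages, in practice you'd try a few candidates per file and keep the decode that produces plausible text.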


You can run nearly any Windows app with WinBoat. It's not based on Wine; it runs real Windows in a container.


One reason is that Linux has no backwards compatibility, and maintaining each piece of software in the repos takes people. It is linear: more software requires more maintainers, otherwise the software stops compiling in a year or two.


Creative Cloud and DAWs. Those are my only reasons and basically the only reasons I ever hear from people. A Linux port of Photoshop would probably put a small dent in Windows' market share at this point.


Windows' architecture is better. It is from the 1990s (and was very advanced at the time), while the Linux architecture is from the 1960s.


my secret plan to get HN users into vtubers by making a worse version of live2d with machine learning for no reason is going to make me millions


what do you mean by "heavily structured output"? i find it generates the most natural-sounding output of any of the LLMs—cuts straight to the answer with natural-sounding prose (except when it sometimes decides to use ChatGPT-style output with its emoji headings for no reason). I've only used it on kimi.com though, wondering what you're seeing.


Yeah, by "structured" I mean how it wants to do ChatGPT-style output with headings and emoji and lists and stuff. And the punchy style of K2 0905 as shown in the fiction example in the linked article is what I really dislike. K2 Thinking's output in that example seems a lot more natural.

I'd be totally on board if it cut straight to the answer with natural-sounding prose, as you described, but for whatever reason that has not been my experience.


From what I've heard, Kimi K2 0905 was a major downgrade for writing.

So, when you hear people recommend Kimi K2 for writing, it's likely that they recommend the first release, 0711, and not the 0905 update.


Ohhh, thanks, that's really good to know. I'll have to give that one a shot.


Interesting. As others have noted, it has a cut-straight-to-the-point, non-sycophantic style that I find exceptionally rich in detail and impressive. But it sounds like you're saying an earlier version was even better.


Again, it's just what I've heard, but the way I've heard it described is: they must have fine-tuned 0905 on way too many ChatGPT traces.


> I find it generates the most natural-sounding output of any of the LLMs

Curious, does it do as well/natural as claude 3.5/3.6 sonnet? That was imo the most "human" an AI has ever sounded. (Gemini 2.5 pro is a distant second, and chatgpt is way behind imo.)

