Hacker News | fouc's comments

I guess that means you don't live in the US, or in the UK, or in Australia.

Correct

He meant with X and a web browser and so on. The QNX disk had a GUI + browser and a few other GUI apps.

No, I meant the base system. A system with X would take at least 20 floppies or so with Slackware 3. The whole setup was 80 floppies in total.

I’m sure it’s better now; it wasn’t when QNX came out.


Oh huh, I forgot about that. I guess I downloaded X after the base system was installed.

Photon microGUI was included in that, and it blew my mind that you could literally kill and restart Photon without disturbing any of the GUI apps that were still running.

They also mailed a manual along with the demo disk, and I was amazed that QNX had built-in network bonding, amongst lots of other neat features. At the time I was using Slackware and the Linux kernel was still at 1.x; I don't think bonding came to Linux until 2.x?


new utility command coming soon! wdtci - "what does this curl install?"

Depends on dtps - "does this program stop".

A lot of the responses seem to be blaming the user/learner and requiring them to change their mindset/attitude, which is actually an insane take.

As you pointed out, SRS isn't the full solution.

BTW, I would say that language classes often try to maintain a constant level of difficulty, but there is usually some kind of coverage of the previous material too.


Pretty good. I've noticed the animation tends to veer off / hallucinate quite a lot near the end. It's clear that the model isn't maintaining any awareness of the first image. I wonder if there's a way to keep the original image in the context, or add the original image back in at the halfway mark.

Thank you. I've noticed that too, and also that it has a tendency to introduce garbled text when not given a prompt (or a short one).

This is using the default parameters for the ComfyUI workflow (including a negative prompt written in Chinese), so there is a lot of room for adjustments.


Oh I was wondering why some of the hallucinations introduced Chinese text/visuals, I'm guessing that might be due to the negative prompt.

I think the main reason is that the model has a lot of training material with Chinese text in it (I'm assuming, since the research group who released it is from China), but having the negative prompt in Chinese might also play a role.

What I've found interesting so far is that sometimes the image plays a big part in the final video, but other times it gets discarded almost immediately after the first few frames. It really depends on the prompt, so prompt engineering is (at least for this model) even more important than I expected. I'm now thinking of adding a 'system' positive prompt and appending the user prompt to it.
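The 'system' positive prompt idea could look something like the following minimal sketch. Everything here is illustrative (the function name, the system prompt text, and the joining convention are assumptions, not the actual ComfyUI workflow API):

```python
# Hypothetical sketch: prepend a fixed "system" positive prompt to whatever
# the user types, so the model always gets some style/faithfulness guidance.
SYSTEM_PROMPT = (
    "hand-drawn sketch animation, consistent line style, "
    "stay faithful to the original drawing"
)

def build_positive_prompt(user_prompt: str) -> str:
    """Append the user's prompt to the fixed system prompt."""
    user_prompt = user_prompt.strip()
    if not user_prompt:
        # Empty/missing user prompts fall back to the system prompt alone,
        # which should also help with the garbled-text issue on no-prompt runs.
        return SYSTEM_PROMPT
    return f"{SYSTEM_PROMPT}, {user_prompt}"

print(build_positive_prompt("the cat jumps over the fence"))
```

A comma-joined concatenation like this matches how positive prompts are usually written for diffusion models, but the exact separator and ordering are worth experimenting with.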


Would be interesting to see how much a good "system"/server-side prompt could improve things. I noticed some animations kept the same sketch style even without specifying that in the prompt.

Could do something funky like convert it to grayscale, add a 4th "colour" channel, and put the grayscale image in that.

I'm actually trying to reduce the 'funkiness'; initially the idea was to start from a child's sketch and bring it to life (so kids can safely use it as part of an exhibit at an art festival) :)

There's a world of possibilities though, I hadn't even thought of combining color channels.


I think they were suggesting that it might be possible to inject the initial sketch into every image/frame such that the model will see it but not the end user. Like a form of steganography which might potentially improve the ability of the model to match the original style of the sketch.
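A toy version of that steganography idea: hide a 1-bit rendering of the original drawing in the least significant bit of each frame's grayscale pixels, so the sketch rides along invisibly. This is a pure-Python sketch on nested lists under assumed 0-255 pixel values; a real pipeline would do this on image tensors, and whether the model could actually exploit the hidden signal is an open question:

```python
def embed_sketch(frame, sketch):
    """Overwrite the LSB of each 0-255 pixel with the sketch's 0/1 bit.

    Changes each pixel by at most 1, so the result is visually unchanged.
    """
    return [
        [(p & ~1) | bit for p, bit in zip(frame_row, sketch_row)]
        for frame_row, sketch_row in zip(frame, sketch)
    ]

def extract_sketch(frame):
    """Recover the hidden 0/1 sketch from the frame's LSBs."""
    return [[p & 1 for p in row] for row in frame]

frame = [[120, 201], [64, 255]]   # grayscale frame pixels
sketch = [[1, 0], [0, 1]]         # 1-bit original drawing
stego = embed_sketch(frame, sketch)
assert extract_sketch(stego) == sketch
```

The 4th-channel suggestion upthread is the same idea with a dedicated channel instead of bit-twiddling, which is simpler but only works if the model actually reads that channel.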

That would require a browser that supports WebP

...which is, like, all of them released over the past 5 years: https://caniuse.com/webp

Does this work on a browser that doesn't support WebP? That would be useful.

The title for your submission should be a Show HN:

Interesting point about the problem of HN's depth-first comment structure. It feels like perhaps HN should collapse sub-comments by default to encourage more breadth. Perhaps collapse by default at the 2nd level.

Your approach to solving that is an interesting UX idea, but even with the basic understanding that the comments are sorted by their levels, it wasn't very obvious how to navigate the app. I kinda expected to be able to easily scroll across all level 1 comments (whether vertically or horizontally).

I was also surprised that the first comment was already pre-selected, and that level 2 comments were only shown for the selected comment. You could require new users to click on a comment first before expanding to show the level 2 comments, which gives them an opportunity to learn the relationship between the two.
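The collapse-at-depth-2 idea could be sketched like this. The tree structure and field names are illustrative, not HN's actual data model:

```python
# Toy sketch of depth-based default collapsing for a comment tree: walk the
# tree depth-first and mark everything at depth >= 2 as collapsed.
def mark_collapsed(comment, depth=0):
    """Return a flat list of (comment_id, collapsed) pairs, depth-first."""
    out = [(comment["id"], depth >= 2)]
    for child in comment.get("children", []):
        out.extend(mark_collapsed(child, depth + 1))
    return out

thread = {
    "id": 1,
    "children": [
        {"id": 2, "children": [{"id": 3, "children": []}]},
        {"id": 4, "children": []},
    ],
}
# id 3 sits at depth 2, so it starts collapsed; everything else is expanded.
assert mark_collapsed(thread) == [(1, False), (2, False), (3, True), (4, False)]
```

The click-to-expand behaviour suggested above would then just flip the `collapsed` flag on a comment's children when it is selected.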


Appreciate the feedback! Agreed that on HN, even collapsing at L2 would be a significant improvement.

I'll keep brainstorming with the UX for the next version. I appreciate your perspective on how all the data disclosed on first load might be disorienting with unclear relationships. I'll play with either some sort of animation cue, colored or bordered visual indicators, or a default null selection as you've suggested.


Oof, that felt too real. I'm half tempted to make that a reality before someone else does.

That's often the thing about these torment nexuses, they're somehow profitable.

