
I agree, but that is nothing new. The original SaaS was already proprietary.

And btw, the "reverse engineering" clause was already there too. You can check the archive.org snapshot from Jan 2025, months before the Qualcomm acquisition.

https://web.archive.org/web/20250120145427/https://www.ardui...

So this citation is basically fake news and FUD. The *now* part is false, and it hides the fact that the "platform" is only the SaaS.

> Phillip Torrone had warned [...] Arduino’s users were now “explicitly forbidden from reverse engineering or even attempting to understand how the platform works unless Arduino gives permission.”


I feel like the Qualcomm thing has just woken a lot of people up to how Arduino has been enshittifying for years.

They released their first closed-source "pro" boards in 2021.

https://blog.adafruit.com/2023/07/12/when-open-becomes-opaqu...


This can fool someone from only one location and only in one way (if you are near Somalia and expect ~10ms latency, a VPN can't reduce latency to simulate being in Somalia). So it has to be dynamic to fool multiple locations and stay plausible.

But anyway, *you can't fool the last-hop latency* (unless you control it, and you can't control all of it); it's basically impossible to fake that.
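For illustration, here is a toy version of that check in Python (my own sketch, not any real geolocation service's code): light in fiber covers at most ~100 km of distance per millisecond of round-trip time, so a low RTT proves proximity, and a VPN can only ever add latency, never remove it.

  # Toy latency sanity check: RTT puts a hard floor on distance,
  # since signals travel at most ~200,000 km/s in fiber -- i.e. the
  # RTT to a point d km away can never be below d / 100 milliseconds.
  def looks_local(rtt_ms, expected_km, slack_ms=5.0):
      floor_ms = expected_km / 100.0   # physical minimum RTT
      return rtt_ms <= floor_ms + slack_ms

  # A probe near Somalia expects ~10 ms from a genuinely local user.
  print(looks_local(10, 500))    # True: consistent with being local
  print(looks_local(180, 500))   # False: probably tunneled from afar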


Yeah... I came here to talk about that. It should have been

  for i in range(0, 2**8, 2):
      print("    if (number == "+str(i)+")")
      print("        printf(\"even\\n\");")
      print("    if (number == "+str(i + 1)+")")
      print("        printf(\"odd\\n\");")
or

  for i in range(0, 2**8, 2):
      print(f"""    if (number == {i})
          puts("even");
      if (number == {i + 1})
          puts("odd");""")

What happens when you try to compute 2**8 + 1?

If it's too large you could just subtract 2**8 and try again.
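In the spirit of the joke, the generator would just need to emit a parity-preserving reduction before the ifs (a sketch; subtracting 2**8 is safe because 256 is even):

  # Prepend a range reduction to the generated C: keep subtracting
  # 2**8 (which doesn't change parity) until the ifs can handle it.
  print("    while (number > 255)")
  print("        number -= 256;  /* subtract 2**8 and try again */")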

Should work fine with long long?

I really like SVG; I did a lot of things with it, some of them interesting. The only complaint I have is that it is sometimes slow.

Like QR codes, precise maps, or grids of 100+ pixel squares. More than 100 "DOM" elements and it will take multiple seconds to show.

The animations are also slow compared to canvas, plain CSS, or Lottie, but nothing very cursed; it's mostly fine.
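If anyone wants to reproduce the slowdown, here is a quick sketch (mine; the pattern and counts are arbitrary) that writes an SVG with thousands of rect nodes:

  # Write an SVG with a 100x100 grid of tiny rects (thousands of DOM
  # nodes); open grid.svg in a browser to see how element count hurts.
  cells = []
  for y in range(100):
      for x in range(100):
          if (x * y) % 3 == 0:  # arbitrary pseudo-QR pattern
              cells.append(f'<rect x="{x}" y="{y}" width="1" height="1"/>')
  svg = ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
         + ''.join(cells) + '</svg>')
  with open("grid.svg", "w") as f:
      f.write(svg)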


I embedded a chess engine in an SVG image of a chess board (https://github.com/jnykopp/svg-embedded-chess) so that the engine moved the pieces automatically and played against itself, just by viewing the SVG.

This was done for a friend of mine who made an art installation that projected something like a 50x20 grid (can't remember exactly) of these images on a wall, for perpetual chess madness.

The number of chess SVGs a laptop's browser was able to run simultaneously did feel surprisingly low, but luckily it was enough for that particular piece of art.


Interesting -- is there any video of the art installation?

Sadly, it seems there is not. But the web page the artist used for the installation is still up: https://heikkihumberg.com/chess/

He said he used iPads as renderers. And even a single grid may have looked different back in the day than that page does now, as the font might differ. The SVG just uses system fonts, and the chess pieces are just Unicode characters.


Cool, thanks.

Is there a way to control the speed? When I load a single SVG into the browser, it runs through the whole game in a flash. (Edge shows the animation; Chrome and Firefox show a static image for me.)


There are three timeouts defined in the SVG / embedded JavaScript code, on lines 66-68 (https://github.com/jnykopp/svg-embedded-chess/blob/a24249729...)

You can increase COMP_MOVE_TIMEOUT (which is now 1 millisecond) to, say, 100 milliseconds.

RESET_TIMEOUT defines how long the game is paused after it finishes, to let the viewer see the result, and NEW_GAME_START_TIMEOUT defines how long to wait before making the first move when a new game starts.

The static image may be due to some browser security mechanism; served raw from GitHub, the SVG is not animated for me either in Firefox, but when I download it and view it from the local drive in Firefox, it works. (It did work when served from GitHub at some point in history, though.)


Thanks for the details!

Is embedding intelligent logic inside SVGs for animation a common thing? It feels very novel to me. Kudos for the idea and execution!

I am wondering if it is possible to push it even further and bring in more and more creative logic -- say, to create unique patterns / designs that render differently each time. Say, a swirling-ripples animation that keeps changing over time but never feels "pre-recorded".
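That seems doable; here is a rough sketch (my own, assuming the viewer runs scripts in SVG documents) of a Python script that emits an SVG seeding its animation from Math.random(), so every load drifts differently:

  # Emit an SVG whose embedded script picks a fresh random phase per
  # view, so the "ripple" never replays the same way twice.
  svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
    <circle id="dot" cx="100" cy="100" r="10" fill="teal"/>
    <script>
      var t = Math.random() * 1000;              // unique seed per view
      setInterval(function () {
        t += 0.05;
        var r = 10 + 60 * Math.abs(Math.sin(t)); // breathing ripple
        document.getElementById("dot").setAttribute("r", r);
      }, 30);
    </script>
  </svg>"""
  with open("ripple.svg", "w") as f:
      f.write(svg)

(Note it would only animate when the SVG is loaded as a document, not via an img tag, for the same security reasons mentioned above.)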

Also, can animated SVGs be embedded in PowerPoint and the like -- so we get crisp, vector, animated design elements in a compact portable format?

I do worry that this could also open up some attacks -- malicious URLs in a dynamically generated QR code, for example.


Yeah; I've built a map viewer in SVG+JS for my small browser game, and it works quite well for that purpose, but when I tried to repurpose the underlying code for a different game with a much higher object density, it became unmanageably slow. (I rebuilt that game's map using canvas, but it does lose me some functionality.)

Looks very fake. Self-published (Anima-Core is NOT a journal), no prior academic record, very strong claims, no peer review, no public history of technical skill. Did I mention the repo was only ever touched through the GitHub web interface?

At the same time, it's possible, since it's only classification tasks. I mean, the method explained is technically plausible; a lot of people have thought about it, we were just never able to find a method that works.

Very unlikely to be true, unfortunately.


Did you not see the author's note about being an outsider to academia? Not everyone has the background to pull all that off. This is an earnest attempt to come as close as possible and they even invite feedback that would help it become a real academic submission.

No, it's a waste of time.

I mean, the right process would have been to contact some local academics to discuss the matter. If I say it works (or it doesn't), I'm adding next to nothing to the claim, as I'm not an academic myself.

Big claims like this need clear and solid work. Here it just looks LLM-generated.


Have you run the walk-through to reproduce it? They provide a highly detailed step-by-step document. They welcome raising an issue if reproduction doesn't yield the claimed results within 2%.

It's OK to call out fake claims. But that requires going through the process when doing so is reasonable, and here it seems to take only a couple of hours to find out.


The fake claim here is compression. The results in the repo are likely real, but they're done by running the full transformer teacher model every time. This doesn't achieve anything novel.

That's not how the method works... The full transformer is only needed once, to extract the activation fields. That step can even be done offline. Then the teacher can be discarded entirely. The compression result refers to the size of the learned field representation plus the small student head that operates directly on it. Simple. No fake claim there. Inference with the student does not involve the transformer at all.

If you look at the student-only scripts in the repo, those runs never load the teacher. That's the novel part.
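In pseudo-PyTorch, the pattern being described looks something like this (a hypothetical sketch; the stand-in names and shapes below are mine, not the repo's):

  import torch
  import torch.nn as nn

  # Stand-ins for the real pieces (hypothetical; the repo's names differ).
  inputs, num_classes = torch.randn(1000, 512), 10
  class Teacher(nn.Module):          # pretend frozen transformer
      def encode(self, x):
          return x                   # the real one returns activations
  teacher = Teacher()

  # 1) One-time, offline: cache the teacher's activation fields.
  with torch.no_grad():
      fields = teacher.encode(inputs)
  torch.save(fields, "fields.pt")
  del teacher                        # never loaded again

  # 2) Train a small student head directly on the cached fields.
  student = nn.Sequential(nn.Linear(fields.shape[-1], 128),
                          nn.ReLU(),
                          nn.Linear(128, num_classes))
  # ... ordinary supervised loop over (fields, labels) ...

  # 3) Student-only inference: the transformer is not in memory at all.
  logits = student(torch.load("fields.pt"))

The "compression" figure would then be the size of fields.pt plus the student, versus the teacher's weights.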


I agree the claim is (perhaps purposefully) confusing.

What they achieved is creating tiny student models, trained on a specific set of inputs, off the teacher model's output.

There is clearly novelty in the method and in what it achieves. Whether what it achieves covers many cases is another question.


Can you please share the relevant code that trains such a tiny student model so that it can operate independently of the big teacher model after training? The repository has no such code.

> You can't know 200 people, but you can know 10 people who each know 10 people

You are still 100 people short of knowing 200 people, but I get the idea.

The 100-person limit is already known by most teachers. With more than 3 classes, it is mostly impossible (very hard) to have a "deep" follow-up with each student. With more than 6 classes, it is strictly impossible to follow them, even in the best conditions.


I agree; I laughed out loud at that pun.


Thank you for the technical write-up. I have no expertise in the BTE area, but it's clear enough for me to understand.

The swap pattern is very interesting, but even if it sounds silly, experimenting with an actual camera to detect cameras might give you a good baseline for what to expect from a working Rayban banner.


I don't particularly like Zig; actually, I don't like the language. But I have to admit it's a bold move, and free software projects should be encouraged to do the same.


Even a 5090 can't handle that. You have to use multiple GPUs.

So the only option on a single GPU will be [klein]... maybe? Since we don't have much information.


As far as I know, no open-weights image gen tech supports multi-GPU workflows except in the trivial sense that you can generate two images in parallel. The model either fits into the VRAM of a single card or it doesn’t. A 5ish-bit quantization of a 32Gw model would be usable by owners of 24GB cards, and very likely someone will create one.
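The arithmetic behind that, back-of-the-envelope (assuming the weights dominate memory use):

  params = 32e9             # a "32Gw" model
  bits_per_weight = 5.0     # a "5ish-bit" quantization
  size_gib = params * bits_per_weight / 8 / 2**30
  print(f"{size_gib:.1f} GiB")  # ~18.6 GiB, leaving headroom on a 24GB card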


> Even a 5090 can't handle that. You have to use multiple GPUs.

It takes about 40GB with the fp8 version fully loaded, but with enough system RAM available, ComfyUI can partially load models into VRAM during inference and swap as needed (at reduced speed), which lets it run on systems with too little VRAM to fully load the model. The NVidia page linked in the BFL announcement specifically highlights NVidia working with ComfyUI to improve this existing capability to enable Flux.2.

