Hacker News | farzher's comments

Faster than LuaJIT's interpreter... with the JIT disabled. would've been nice to make that more clear.


the answers are more absurd than the questions


> Amazon calculates a product's star rating using machine-learned models instead of a simple average.

lmao! i thought you linked me to an April fools page for a second .-.

"63% of reviews are 5 stars" but when you filter to 5 star reviews, there's zero. small indie company. i'm guessing this is all just an excuse for Amazon to inflate their numbers


Ratings != Reviews: you can leave a rating without leaving a review. The math still doesn't add up, but it looks like there are actually 4 five-star ratings. How that makes 63% of a total of 12 reviews is beyond me.
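(For the arithmetic: 4 five-star ratings out of 12 would be about 33%, while 63% of 12 would be roughly 7.6, so whatever Amazon is averaging, it can't be just the 12 visible reviews.)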


anyone have an explanation for this? this isn't an issue of fake/bot reviews.

why does Amazon say the product is 4/5 stars even though no reviews are saying that... ?

https://i.imgur.com/vTOiujc.png


writing 100% of the code on the gpu lets you render 10,000,000 triangles per frame at 60fps ... even in the web browser! (because there's no javascript running) https://www.youtube.com/watch?v=UNX4PR92BpI

but yes, that's cheating, since it's impractical to work with


you can write 100% of the code on the gpu. but that's impractical to work with. i did that here to see how fast webgl can go, since javascript is so slow https://www.youtube.com/watch?v=UNX4PR92BpI

for this bunnymark i have 1 VBO containing my 200k bunnies array (just positions), and 1 VBO containing just the 6 verts required to render a quad. turns out the VAO can just read from both of them like that. the processing is all on the CPU and just overwrites the bunnies VBO each frame
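not the actual jai code, but a rough C sketch of that two-VBO layout as i understand it (assuming instanced rendering, an already-initialized GL 3.3+ context, and a shader program with the quad corner at attribute 0 and the per-bunny position at attribute 1; the real thing may be wired up differently):

    // rough sketch, not the actual code. assumes glewInit() has run on a GL 3.3+ context
    // and a shader program is bound with: location 0 = quad corner, location 1 = bunny position.
    #include <GL/glew.h>

    #define BUNNY_COUNT 200000

    static GLuint vao, quad_vbo, bunny_vbo;
    static float bunny_positions[BUNNY_COUNT * 2];   // xy per bunny, simulated on the CPU

    void setup_bunny_vao(void)
    {
        // 6 verts = 2 triangles = 1 quad, shared by every bunny
        static const float quad[12] = {
            0,0,  1,0,  1,1,
            0,0,  1,1,  0,1,
        };

        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        // VBO #1: static quad geometry, one value per vertex
        glGenBuffers(1, &quad_vbo);
        glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

        // VBO #2: one xy position per bunny, rewritten every frame
        glGenBuffers(1, &bunny_vbo);
        glBindBuffer(GL_ARRAY_BUFFER, bunny_vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(bunny_positions), NULL, GL_STREAM_DRAW);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
        glVertexAttribDivisor(1, 1);   // advance once per instance, not per vertex
    }

    void draw_bunnies(void)
    {
        // CPU updates bunny_positions, then the whole bunnies VBO gets overwritten
        glBindBuffer(GL_ARRAY_BUFFER, bunny_vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(bunny_positions), bunny_positions);

        glBindVertexArray(vao);
        glDrawArraysInstanced(GL_TRIANGLES, 0, 6, BUNNY_COUNT);
    }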


> One fun thing I discovered is just how low latency a pure CPU rasterizer can be compared to a full CPU-GPU pipeline

i'm definitely going to have to test that! always trying to minimize input delay


I think it can reduce input delay enough to change the economics of game streaming, but the current state of cloud economics makes it difficult to scale in practice.


i'm just starting to learn directx and noticed it can render a triangle at 12,000 fps! i had no clue this was possible. i don't think there's any room for input delay there, but i'll find out
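(rough arithmetic: 12,000 fps works out to 1000 ms / 12,000 ≈ 0.083 ms per frame, a tiny fraction of a 60 Hz display's ~16.7 ms budget, so presumably the display refresh and swap chain, not the rendering itself, would dominate whatever input delay remains)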


you're not missing anything, it's not impressive. i was just checking how fast computers are and sharing the results. my original title was "an optimized 2d game engine can render 200k sprites at 200fps" but the mods changed it to match my youtube title (which made it a lot more popular). and the fact it's written in jai isn't relevant, it's just what i happened to use


> and the fact it's written in jai isn't relevant

I figured it wasn't, given that you were showcasing a GL project. But it's nonetheless disappointing, as someone curious whether the language helped in indirect ways with how you structured your project, and whether you feel you could scale it up to something closer to production-ready. That did seem to be the goal of Jai when I last looked into its development some 4 years ago.


jai's irrelevant to the performance here, but it's very relevant to how easy this was to make. i'm not a systems programmer. i've tried writing hardware-accelerated things like this in C++ but have failed to get anything to compile for years. the only reason i was able to get this working is because of jai. this is my first time successfully using OpenGL directly, outside of someone else's game engine


it's a lot of fun! jai is my intro to systems programming. so i haven't tried this in C++ (actually i have tried a few times over the past few years, but never successfully).

this is just a test of opengl; C++ should give the exact same performance, considering my cpu usage is only 7% while gpu usage is 80%. but the process of writing it is infinitely better than in C++, since i never got C++ to compile a hardware-accelerated bunnymark.

the only bunnymarks i'm aware of are slow https://www.reddit.com/r/Kha/comments/8hjupc/how_the_heck_is...

which is why i wrote this, to see how fast it could go.


I thought Jai wasn't released yet. Are you a beta user or did he release it already?


It isn't released. That said, from people I know, it seems like you can just ask nicely and show some interest and he'll let you try it out.


That only applies if you are a known name (probably being known among his fans works too), or have somebody in his circle vouch for you. Regular people don't get in.


this is untrue. (source: firsthand)


Curious. Did that change more recently? When did you enter?


over a year ago. I explained that I worked on game engines in college and they were terrible and overengineered and wildly inefficient and I wanted to do things better going forward.


the official rendering modules are a bit all over the place atm... did you use Simp, Render, GL, or handle the rendering yourself?


just used raw GL calls from #import "GL". although i did #import "Simp" as well for Simp.Texture and Simp.Shader, which Simplified things quite a bit


i finally got around to writing an opengl "bunnymark" to check how fast computers are.

i got 200k sprites at 200fps on a 1070 (while recording). i'm not sure anyone could survive that many vampires


that many rabbits, it's frightening!

Do you have the code somewhere? I would like to see how it's made.

