> Amazon calculates a product's star rating using machine-learned models instead of a simple average.
lmao! i thought you linked me to an April fools page for a second .-.
"63% of reviews are 5 stars" but when you filter to 5 star reviews, there's zero. small indie company. i'm guessing this is all just an excuse for Amazon to inflate their numbers
Ratings != reviews; you can leave a rating without leaving a review. The math still doesn't add up, but it looks like there are actually 4 five-star ratings. How that makes 63% of a total of 12 reviews is beyond me.
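just to put numbers on "the math still doesn't add up" (back-of-the-envelope, assuming the 4 five-star ratings and 12 reviews are the actual counts):

    4 / 12   ≈ 33%    (a plain average over the 12 reviews would show 33%, not 63%)
    4 / 0.63 ≈ 6.3    (for 4 five-stars to be a straight 63%, the total pool would
                       need ~6 ratings, i.e. fewer than the 12 reviews alone)

so no simple ratio gets you to 63%, which is at least consistent with the weighted "machine-learned model" claim in the article.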
writing 100% of the code on the gpu lets you render 10,000,000 triangles per frame at 60fps ... even in the web browser! (because there's no javascript running) https://www.youtube.com/watch?v=UNX4PR92BpI
but yes, that's cheating, since it's impractical to work with
you can write 100% of the code on the gpu. but that's impractical to work with. i did that here to see how fast webgl can go, since javascript is so slow https://www.youtube.com/watch?v=UNX4PR92BpI
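to make "100% of the code on the gpu" concrete: the trick is attribute-less rendering, where the vertex shader derives every vertex from gl_VertexID, so there are no vertex buffers and the CPU/javascript side does nothing per frame except issue one draw call. this is only a minimal sketch of that idea, not the code from the video; "program", the grid size, and all the compile/context boilerplate are assumed or omitted:

    /* minimal sketch of attribute-less rendering: all geometry comes from
       gl_VertexID, so no VBOs are needed. assumes a GL context exists and the
       shader below is already compiled+linked into `program` along with a
       trivial fragment shader (boilerplate omitted); a core-profile context
       also wants an empty VAO bound. */
    static const char *vertex_src =
        "#version 300 es\n"
        "void main() {\n"
        "    int tri    = gl_VertexID / 3;  /* which triangle */\n"
        "    int corner = gl_VertexID % 3;  /* which of its corners */\n"
        "    /* lay triangles out in a 1000x1000 grid purely from the index */\n"
        "    float x = float(tri % 1000) / 500.0 - 1.0;\n"
        "    float y = float(tri / 1000) / 500.0 - 1.0;\n"
        "    vec2 offs[3] = vec2[3](vec2(0.0, 0.002),\n"
        "                           vec2(-0.002, -0.002),\n"
        "                           vec2(0.002, -0.002));\n"
        "    gl_Position = vec4(vec2(x, y) + offs[corner], 0.0, 1.0);\n"
        "}\n";

    /* per frame, the CPU side is just this: */
    glUseProgram(program);
    glDrawArrays(GL_TRIANGLES, 0, 1000 * 1000 * 3);  /* 1M triangles, zero vertex data */

scale the grid up and you're in 10M-triangle territory; the point is that frame to frame there is no CPU-side vertex work at all.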
for this bunnymark i have 1 VBO containing my 200k-bunny array (just positions), and 1 VBO containing just the 6 verts required to render a quad. turns out the VAO can just read from both of them like that. the processing is all on the CPU, which just overwrites the bunnies VBO each frame
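in C terms, one common way to wire up that two-VBO / one-VAO setup is instanced rendering. this is a hedged sketch of my reading of the description, not the actual jai code; shaders and context setup are omitted, and attribute locations 0/1 are assumed:

    #define NUM_BUNNIES 200000
    static float positions[NUM_BUNNIES * 2];   /* x,y per bunny, simulated on the CPU */

    GLuint vao, quad_vbo, bunny_vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    /* VBO 1: the 6 verts of a unit quad (two triangles), uploaded once */
    float quad[12] = { 0,0,  1,0,  1,1,   0,0,  1,1,  0,1 };
    glGenBuffers(1, &quad_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);  /* advances per vertex */

    /* VBO 2: per-instance bunny positions, rewritten every frame */
    glGenBuffers(1, &bunny_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, bunny_vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(positions), NULL, GL_STREAM_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glVertexAttribDivisor(1, 1);  /* attribute 1 advances once per instance */

    /* each frame: CPU updates `positions`, overwrites the VBO, draws once */
    glBindBuffer(GL_ARRAY_BUFFER, bunny_vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(positions), positions);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, NUM_BUNNIES);

the VAO records, per attribute, which buffer was bound when glVertexAttribPointer was called, which is why a single VAO can happily source from both VBOs at once.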
I think it can reduce input delay enough to change the economics of game streaming, but the current state of cloud economics makes it difficult to scale in practice.
i'm just starting to learn directx and noticed it can render a triangle at 12,000 fps! i had no clue this was possible. i don't think there's any room for input delay there, but i'll find out
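for scale (simple arithmetic, not a claim about any particular setup):

    1 s / 12,000 frames ≈ 0.083 ms per frame

so the draw itself contributes well under a millisecond of latency; whatever input delay you do measure will mostly come from the swap chain, OS, and display rather than the rendering.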
you're not missing anything; it's not impressive. i was just checking how fast computers are and sharing the results. my original title was "an optimized 2d game engine can render 200k sprites at 200fps", but the mods changed it to match my youtube title (which made it a lot more popular). and the fact it's written in jai isn't relevant; it's just what i happened to use
I figured it wasn't, given that you were showcasing a GL project. But it's nonetheless disappointing, as I was curious whether the language helped in indirect ways with how you structured your project, and whether you feel you could scale it up to something closer to production-ready. That did seem to be the goal of Jai when I last looked into its development some 4 years ago.
jai's irrelevant to the performance here, but it's very relevant to how easy this was to make. i'm not a systems programmer. i've tried writing hardware-accelerated things like this in C++ but have failed to get anything to compile for years. i was only able to get this working because of jai. this is my first time successfully using OpenGL directly, outside of someone else's game engine
it's a lot of fun! jai is my intro to systems programming.
so i haven't tried this in C++ (actually i have tried a few times over the past few years, but never successfully).
this is just a test of opengl; C++ should give the exact same performance, considering my cpu usage is only 7% while gpu usage is 80%.
but the process of writing it is infinitely better than in C++, since i never got C++ to compile a hardware-accelerated bunnymark.
That only applies if you are a known name (probably being known among his fans works too), or have somebody in his circle vouch for you.
Regular people don't get in.
Over a year ago. I explained that I had worked on game engines in college, that they were terrible, overengineered, and wildly inefficient, and that I wanted to do things better going forward.