skocznymroczny's comments

Isn't that just Half-Lambert that Valve came up with for Half-Life?

"To soften the diffuse contribution from local lights, the dot product from the Lambertian model is scaled by ½, add ½ and squared. The result is that this dot product, which normally lies in the range of -1 to +1, is instead in the range of 0 to 1 and has a more pleasing falloff. "

https://developer.valvesoftware.com/wiki/Half_Lambert
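The operation the quote describes is tiny; a Python sketch for illustration (the real thing lives in shader code, and the function names here are my own):

```python
def lambert(n_dot_l: float) -> float:
    """Standard Lambertian diffuse term: clamp N.L to [0, 1]."""
    return max(n_dot_l, 0.0)

def half_lambert(n_dot_l: float) -> float:
    """Valve's Half Lambert: scale N.L by 1/2, add 1/2, then square.
    Maps the full [-1, 1] range into [0, 1] with a softer falloff."""
    x = n_dot_l * 0.5 + 0.5
    return x * x

# Surfaces facing away from the light (N.L < 0) still receive some
# light under Half Lambert instead of going fully black:
# lambert(-0.5) == 0.0, while half_lambert(-0.5) == 0.0625
```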


It is. Takes me back to asking Valve about this for my bachelor's thesis 15 years ago and getting a personal reply from Gabe Newell, who CC'd the developer who came up with the formula. He was all giggly, asking whether I also wanted to know about the skeletal animation he came up with, or the lip sync stuff. Fun guys

That SD 1.5 picture doesn't look like base SD 1.5. It's way too good; perhaps it was some kind of finetune like RealisticVision?


Hrm, yeah, you're right. The page on fal used to produce it was linked with the image, but maybe I made a mistake and sloppily saved the wrong one. I'll have to reroll to check.


Could be they are specifically training against it. There was some controversy about "Studio Ghibli style". Similarly, in the early days of Stable Diffusion, "Greg Rutkowski style" was a very popular prompt to get a specific look. These days, modern Stable Diffusion-based models like SD 3 or FLUX have mostly removed references to specific artists from their datasets.


If you think Grey Goo is just StarCraft with a new skin, check out Stormgate. They went so far as to replicate almost all UI elements and put them in similar spots, even things like the top ability bar, which resembles the Spear of Adun/co-op commander interfaces in SC2.


I like to use these AI models for generating mockup screenshots of games. I can drop in a prompt like "create a mockup screenshot of a steampunk 2D platformer in which you play as a robot" and it will give me an interesting screenshot. Then I can ask it to iterate on the style. Of course it's going to be broken in some ways, and it's not even real pixel art, but it gives a good reference to quickly brainstorm some ideas.

Unfortunately I have to use ChatGPT for this; for some reason local models don't do well with such tasks. I don't know if it's the extra prompting sauce that ChatGPT adds, or if diffusion models just aren't well suited to these kinds of tasks.


Trebuchet is the solution.


ktx is KTX (Khronos Texture), a format for storing compressed textures (i.e. image data) so that they can be uploaded to GPU memory without a decompression step in between.

drc is Draco, a library from Google for compressing mesh data.


KTX can store compressed GPU textures as-is, but in this case they're using the Basis Universal codec, which is a bit more involved. Basis stores textures in a supercompressed form, which is decompressed at runtime to produce a less compressed (but still compressed!) format that the GPU can work with. The advantage is that it's smaller over the wire, and one source file can decompress to several different GPU formats depending on what the user's hardware supports.
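The "one source, several targets" step can be sketched roughly like this. The capability names, priority order, and `pick_transcode_target` function are all illustrative assumptions, not the actual Basis Universal API:

```python
# Hypothetical mapping from a hardware capability flag to the GPU
# block-compressed format we would transcode the Basis file into.
# A real transcoder exposes a similar choice; names here are made up.
PREFERENCE = [
    ("astc", "ASTC_4x4"),   # common on modern mobile GPUs
    ("bc7",  "BC7"),        # common on desktop GPUs
    ("etc2", "ETC2_RGBA"),  # older mobile fallback
]

def pick_transcode_target(supported: set[str]) -> str:
    """Pick the best natively supported GPU format for one source file."""
    for capability, target in PREFERENCE:
        if capability in supported:
            return target
    # Worst case: decompress all the way to uncompressed RGBA.
    return "RGBA32"
```

The point of the design is that the asset pipeline ships one file, and this decision is deferred until runtime, on the user's actual hardware.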


This is true. I am a graphics driver developer, and part of my job is to debug rendering issues in games; we often encounter spec violations in them. This is especially common in the era of explicit APIs like DirectX 12 and Vulkan, which shift responsibility onto the developer. The most common culprits are missing resource barriers (image layout transitions in Vulkan) and uninitialized resources (color/depth resources mostly require explicit initialization). The thing about these issues is that they are likely to work on one vendor's GPUs, the one the game developer is using, but break more or less catastrophically on another vendor's GPU.

Also, it's not easy to get them fixed in the game, because of the realities of engine update cycles and things like the console certification process. Unless you report the issue several months before release, there is only a slim chance you will get a fix in the game.


So game companies release broken code, and it somehow becomes the world's responsibility to fix it... Sometimes I wonder if simply allowing these things to break for once would be good for the industry.


What do you mean by standalone renderers? There's The Forge [0], which was used for Starfield. Also, there's nothing stopping you from taking a popular engine and using it as a renderer only. The Oblivion remaster runs the original game code underneath and uses Unreal Engine 5 for rendering. I assume the Diablo 2 remaster did something similar, because you can seamlessly switch between the old and new graphics.

[0] https://theforge.dev/products/the-forge/


I mean, yes, you can do that, but it's pretty hard since the engine expects you to work within its framework. Not saying it's impossible, of course, just annoying. Also, if I'm not mistaken, The Forge is just a cross-platform graphics wrapper? You still need to write a glTF renderer and all that yourself.


I miss the era of websites like flipcode (the archives are still available at https://www.flipcode.com/), with people sharing screenshots of their OpenGL/DirectX engines. Nowadays making a 3D engine is more of a hobby, because you'll never catch up with the big ones.

