Hacker News | cjwebb's comments

What kinds of things do you think have to change with game engines?

Sharding would have to happen to handle geographic distance between people, wouldn't it?

I’m curious to hear your thoughts.


You can still do things like client-side prediction with streaming, but imagine if you controlled all of the clients and could always trust them. You could have the actual simulation running in a big bunker in Ohio somewhere on entirely synchronous terms (<1ms node-to-node), with a bunch of "player" nodes geo-distributed talking to the shared simulation and operating with CSP.

The game concept itself can be engineered around vertical scale constraints. For instance, imagine flight routes in WoW. If the cost to take a flight was modulated by the system load in the desired target region, you could create a much more seamless way to make this work. Think about real-world flight routes and system demand... The determinism goes down a lot for the technology owner, but the effect is much more compelling to the end customer.
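A minimal sketch of that load-modulated flight pricing, in Python. The pricing curve and the load numbers are invented for illustration; the only idea taken from the comment is that the fare should rise as the destination region approaches its capacity, nudging players toward less-loaded areas:

```python
def flight_cost(base_fare: float, region_load: float) -> float:
    """Scale a flight's fare by how close the target region is to capacity.

    region_load is current players / region capacity, in [0, 1).
    The cost rises steeply as the region approaches saturation,
    which softly steers demand toward quieter regions.
    """
    if not 0 <= region_load < 1:
        raise ValueError("region_load must be in [0, 1)")
    # Hypothetical curve: fare doubles at 50% load, blows up near 100%.
    return base_fare * (1 + region_load / (1 - region_load))
```

With a 10-gold base fare, a region at 10% load costs about 11 gold while one at 90% load costs 100, so players self-select away from the hot shard without any hard cap.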


I covered this in my PhD thesis, please check it out and I'd love to hear your thoughts as someone working on this (relatively lonely) problem! https://yousefamar.com/memo/notes/my/phd/

From what you've written so far, while the tech is cool, I'm not sure you're trying to solve a problem that actually exists. This is coming from someone who is bullish on both cloud gaming as well as decentralised architectures for gaming (esp VR) but for other reasons.

You seem to consider updating and rendering millions of players client-side to be a bottleneck; it's not -- human perception bottlenecks kick in long before that. Players can't -- and don't want to -- see all other players that are in the same area. And if they're not, then sharding by area is a much easier solution.

Even if it were a bottleneck, the solution is not to offload that rendering to cloud GPUs; it's the many well-established LOD tricks. Game servers don't send millisecond updates of every player in a world to a client; the bandwidth they end up using for good UX is almost always lower than streaming video. You said networks keep getting faster, which is true, but client GPUs keep getting better too. Shard-less horizontal scaling server-side has already been done, btw (see SpatialOS (https://ims.improbable.io/products/spatialos) or WorldQL (https://github.com/WorldQL/mammoth)), as have many proprietary implementations (think Fortnite, Roblox, etc.), but I don't think it's as useful as we think it is.

Besides this, I can tell you for a fact that "we have more players than we can handle" is an extremely rare thing actually, even on launch days, even though it might not seem like it to us. The biggest problem for games is getting players in the first place. The ones that do tend to have big budgets to spend on their servers, and it's usually not budgeting that's the bottleneck, but bad planning.

I would be curious to know if you've validated this idea with any studios, and if so, how? Or is the 10-20 hrs mainly focused on building?


Your thesis sounds very interesting. I'll give this a look.

> I'm not sure you're trying to solve a problem that actually exists.

This is more about creating the next generation of problems. I want to enable future experiences that HN would believe are comically infeasible right now. The greater the disbelief the better.

I do have a very specific game concept and roadmap in mind. I've not talked to any studios or investors yet. I don't intend to until I've published the first game entirely on my own.

The hard part is all the art in a massive world. My current title is constrained to deal with this reality. I'll definitely need outside help on the real deal. I'm not wasting anyone's time until I'm certain it's going to work.


> but imagine if you controlled all of the clients and could always trust them

This will eliminate a few checks like teleporting/wall-passing etc., but you still can't trust the client; the rest will shift to ML-based cheating methods:

https://www.youtube.com/watch?v=DlsBaQWfE58


True - you always have to deal with the final frontier. There are still very powerful statistical/hybrid systems that can be employed to mitigate cheaters.

You can combine many factors to close the last mile. Getting privileged information off the client machine is a huge part of the battle. The rest can be dealt with using clever tricks, stats, etc.

For example, imagine a game where you are using this ML aimbot to lock onto players' heads. The developer could design a ramping detection system like:

  1. Statistical detection - outlier in performance. Begin deeper analysis.
  2. Review player inputs using our own ML models to determine likelihood.
  3. Escalate to active measures - in-game canaries to bait the aimbot into very unlikely, inhuman responses.
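Stage 1 of that ramp could be as simple as flagging statistical outliers for deeper review. A hedged sketch in Python, where the headshot-rate metric, the z-score test, and the threshold of 3 standard deviations are all invented for illustration:

```python
from statistics import mean, stdev

def headshot_zscore(player_rate: float, population_rates: list[float]) -> float:
    """How many standard deviations a player's headshot rate sits
    above the population mean."""
    mu, sigma = mean(population_rates), stdev(population_rates)
    if sigma == 0:
        return 0.0
    return (player_rate - mu) / sigma

def review_needed(player_rate: float, population_rates: list[float],
                  threshold: float = 3.0) -> bool:
    """Flag a player for the deeper stages (input review, in-game
    canaries) only when their performance is a strong outlier."""
    return headshot_zscore(player_rate, population_rates) > threshold
```

The point of the ramp is cost: the cheap statistical screen runs on everyone, and the expensive ML input review and active canaries only run on the small flagged population.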


Interesting. Can’t think of any faults in your logic!

Are you actively working on this?


Yes. Right now it's entirely in side-project mode, but I'm averaging about 10-20 hours per week.


What would motivate people? I think healthcare, as in your example, would actually see an influx of people motivated to help others. Not everyone is lazy and would do nothing. Plenty of people work in healthcare today for less money than they would get paid in other industries primarily because they value helping others.


Like, this kind of thing? https://voxon.co/

I had to Google it, but since I've recently been diving into volumetric video, the words made me curious.

How's it going? Any particular area you're working on?


Yes, like Voxon and Lumi - swept volume (rather than static, hence 'weird mechanical setups'). It's basically the problem of taking any 2D display of reasonable resolution and moving or spinning it through the 3rd dimension at least 50 times a second. For example, you could spin one or more 32x64 LED panels (like these [0]) around a vertical axis at 3000RPM.

Currently working on both spinning and reciprocating ('flapping') approaches; spinning is nicer in many ways but reciprocating gives a better result.

Have a look at articles tagged 'volumetric-display' on Hackaday [1] for examples.

  [0] https://www.adafruit.com/product/5036

  [1] https://hackaday.com/tag/volumetric-display/


Time to organise the HN Oxford Meetup! :)


I'm in!


He was clearly a smart person.

I do wonder though, if the reason we won’t see his likes again is because he was truly a one-off, or that his particular environment enabled him to show his excellence.

I’ve been reading “The Idea Factory”, about Bell Labs, recently.


Environment definitely plays a big role. It's a balance of environment and individual capabilities. When assessing intellectual contributions, I think people tend to underestimate environmental and structural factors (were you working at a place like Bell Labs, did you go to an elite school, etc.) and overestimate the "innate" abilities or gifts of the individual.

If you look at intellectual history, almost every genius worked in an environment in which they were surrounded by equally brilliant minds, or had ample correspondence with other thinkers of their time (of course there are exceptions).


If I could upvote this again, I would.


Are there any good guides anywhere you'd recommend for hardware startups - and how to reduce this $30K upfront cost? That is a lot of money.


The math I did was pretty basic: BOM * quantity + mold cost. Those are sort of the basic knobs you can fiddle with. Not all molds are equal. I was just talking with someone last week who was going to 3D-print one of the housings for a component he's designing, since the quantities are low, but he found a molding technique that's low quality but simple, and maybe an order of magnitude cheaper than anything I was familiar with. So learning about molding processes can let you design products with a cheaper upfront cost there. I think the mold he was looking at was on the order of hundreds of dollars, instead of the thousands I typically expect.

BOM reduction can be tricky. Lowering your BOM makes more sense the larger your production runs, but when selecting components I tend to sort by price, then find the cheapest component that satisfies my needs. Of course, a more expensive component may allow you to skip other related circuitry, giving a cheaper overall build, so diving into datasheets is important.

Quantity is the other thing you have some control over, but lower-quantity batches have higher per-item costs. Setting up a pick and place for a single board takes the same amount of time as setting one up for a larger run, so if your quantities are low enough, setup fees eventually start being a bigger percentage. Quantity also affects BOM costs. You can easily pay 2x as much for a component at low volumes, so you may not actually save as much as you think you will.
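Putting the comment's BOM * quantity + mold cost arithmetic into a tiny model (the $15 BOM, $5,000 mold, and $500 setup fee below are illustrative numbers, not figures from the comment):

```python
def per_unit_cost(bom: float, quantity: int,
                  mold_cost: float, setup_fee: float = 0.0) -> float:
    """Fixed costs (mold, pick-and-place setup) amortise over the run,
    so small batches carry a much higher per-unit cost."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return bom + (mold_cost + setup_fee) / quantity

# Same $15 BOM, $5,000 mold, $500 setup fee:
small_run = per_unit_cost(15.0, 100, 5000.0, 500.0)     # $70.00/unit
large_run = per_unit_cost(15.0, 10_000, 5000.0, 500.0)  # $15.55/unit
```

This is before the volume discount on components mentioned above, which would widen the gap further by raising the effective BOM on the small run.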

I agree with you that 30k isn't cheap. But if you look at things historically, we've finally reached a price point for hardware where you don't need to be a big business to even think about making consumer-quality electronics. Apple started with kits 40 years ago and took investor money pretty early, if I recall my history correctly. Today I expect it to be easier to bootstrap a hardware company, since there's more infrastructure around bootstrapping. I've seen successful products that won't do a production run until they have a certain number of preorders. But hardware is just always going to be fundamentally more expensive than software.


Search for successful hardware Kickstarter postmortems (is it still called a post-mortem if it succeeded?). I think Bunnie might have done one for Novena.


That rings true for me, but with a slight adjustment: once you have filled your brain with knowledge, you can then condense it to the simplest model, from which the rest can be derived if needed again.


You could rent clothes for a day...

Wake up. Clothes have been delivered overnight. Put them on, wear for the day. At the end of the day, put them in a box outside wherever you're staying the night.

That seems like a feasible thing to build today, if you ignored all the current consequences of fast-fashion.


If you're worried about the BBC running on AWS, how do you feel about government departments with more sensitive data, like the Home Office, and HMRC, using it?


I personally wouldn't feel great about it.

Do they?



