Hacker News | ColinDabritz's comments

Adding a single element from each other office would be a neat way to tie them together. For example, the new wave wall with a single lava lamp, hanging rainbow, and dual pendulum.


There's custom wallpaper in the office which features elements of the other offices, but this is a fun idea. We do have a double pendulum and lava lamp elsewhere in the office, just not on the wave wall.


"Kuszmaul remembers being surprised that a tool normally used to ensure privacy could confer other benefits."

It was surprising to me too! But reflecting on it more closely, most performance isn't about "faster" in a literal sense of "more instructions run per time", but about carefully choosing how to do less work. The security property here being "history independence" is also in a way stating "we don't need to, and literally cannot, do any work that tracks history".

It's definitely an interesting approach to performance, essentially using cryptography as a constraint to prevent more work. What properties do we need, and what properties can we ignore? The question becomes: if we MUST ignore this property cryptographically, how does that affect the process and the related performance?

It certainly feels like it may be a useful perspective, a rigorous approach to performance that may be a path to more improvements in key cases.


I don't think what you're saying is accurate. Your statement would be correct if the measure of how slow the algorithm is were how much compute time it takes.

But the measurement they're actually using is how many books need to be moved. They're free to use infinite compute time AIUI.


I think the point is that having less information available can improve performance because it eliminates processing of extraneous data.

Said another way, stateless approaches are simpler than stateful. There can be an instinct to optimize by using more information, but in this case at least, that was a red herring and adding the constraint improved the results.


If there's no cap on computation time processing extraneous data is free.


Also if your algorithm is perfect and will never be misled by extraneous info.


That's a good insight. I had always thought the key to good algorithm / data structure design was to use all the information present in the data set. For example, if you know a list is sorted, you can use binary search. But perhaps choosing how much of it to omit is key, too. It comes up less often, however. I can't think of a simple example.
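The sorted-list case can be sketched in a few lines (using Python's stdlib `bisect`): exploiting the sorted invariant lets bisection locate an element in O(log n) comparisons instead of an O(n) scan.

```python
# The invariant (sortedness) is extra information the algorithm exploits.
import bisect

data = [2, 3, 5, 7, 11, 13]               # invariant: already sorted
i = bisect.bisect_left(data, 7)            # binary search for insertion point
found = i < len(data) and data[i] == 7
print(i, found)                            # 3 True
```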


> But perhaps choosing how much of it to omit is key, too. It comes up less often, however. I can't think of a simple example.

An example of that is a linked list with no particular sort order. By not caring about maintaining an order, the insertion appends or prepends a node and is O(1).

As soon as you have to maintain any additional invariant, the insertion cost goes up. Either directly or amortized.
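The trade-off can be sketched with a toy singly linked list (hypothetical names, just to make the cost difference concrete): dropping the ordering invariant turns insertion into a single pointer update.

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def prepend(head, value):
    """Unordered insert: O(1), regardless of list length."""
    return Node(value, head)

def insert_sorted(head, value):
    """Sorted insert: O(n) walk to find the right position."""
    if head is None or value <= head.value:
        return Node(value, head)
    cur = head
    while cur.next is not None and cur.next.value < value:
        cur = cur.next
    cur.next = Node(value, cur.next)
    return head

head = None
for v in [3, 1, 2]:
    head = prepend(head, v)          # unordered: newest first -> 2, 1, 3

shead = None
for v in [3, 1, 2]:
    shead = insert_sorted(shead, v)  # maintains invariant -> 1, 2, 3
```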


That's a great one, thank you!


A real-world analogue is detective stories, which work by adding irrelevant details; it's up to the reader to filter down to the truly important details to solve the case.

To generalize it, if figuring out what information is important is more expensive than assuming none of it is, better to simplify.


So it's basically a matter of figuring out what problem context can and should be selectively hidden from the algorithm in order to make it work 'smarter' and not 'harder.' Weird.


The actual better algorithm does use history dependence though. So I found this part of the article misleading.


There was an example of this in the classic 'Duke Nukem 3d'. It had a level by Richard "Levelord" Gray, 'Lunatic Fringe'.

https://dukenukem.fandom.com/wiki/Lunatic_Fringe

This level had a circular hallway ring around the outside that had two full rotations around without intersecting, using the 'build' engine's ability to separate areas by their room connections that also drove the 'room over room' technology which was groundbreaking at the time.

It made for fun multiplayer, and the illusion held well there. The central chamber has 4 entrances/exits if I recall, and you would only encounter two of them in each loop around the outside.

I recall building a toy level while experimenting with the engine that "solves" the "3 houses with 3 utilities without crossing" puzzle using this trick as well.


In what sense is the Duke Nukem thing "an example of this"? The duke thing is an internally consistent programmed behaviour, this is... just random errors caused by a random change in the source code. Duke is maybe non-euclidean geometry, or something. This doom pi thing is... nothing to do with geometry. More an example of "garbage in, garbage out" maybe.


It's an example of "non-euclidean" space, and yes, it is a bit different than the article.


The DN example would be an instance of non-Euclidean topology whereas the other one is presumably non-Euclidean geometry.


> the 'room over room' technology which was groundbreaking at the time.

Bungie's Marathon from 1994 could also do this and demonstrated it in the deathmatch map 5-D Space[1]. It was really only Doom that bugged out on overlapping sectors.

[1] https://www.lhowon.org/level/marathon/30


I wrote a little engine with this capability a long time ago. I didn't know about the Build engine at the time. I divided the world into convex sectors and allowed arbitrary links (portals) instead of following a BSP tree. Rendering is front-to-back and clips at the portal boundary.

If you can rasterize the inside of a convex shape, you can rasterize a sector-portal-world by marking certain faces as portals, and when rendering one, setting the clipping region (or stencil buffer) to where the face would be, then rendering the sector behind it with an appropriate transformation (which is part of the portal's data, alongside the ID of the other sector).
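The traversal described above can be sketched in miniature, reduced to 1D screen-space intervals (hypothetical names; a real engine clips per column or uses a stencil buffer, and portals would also carry a transform). Each sector draw is clipped to the portal it was seen through, which is what solves the hidden-surface problem.

```python
def render_sector(world, sector_id, clip, out, depth=0):
    if depth > 16:                       # guard against portal loops
        return
    lo, hi = clip
    out.append((sector_id, clip))        # "draw" this sector within clip
    for portal in world[sector_id]["portals"]:
        plo, phi = portal["span"]        # portal's projected screen interval
        nlo, nhi = max(lo, plo), min(hi, phi)
        if nlo < nhi:                    # portal visible through current clip
            render_sector(world, portal["target"], (nlo, nhi), out, depth + 1)

# Two sectors; the portal covers screen columns 100..220 (one-way here, only
# to keep the toy example from recursing back and forth).
world = {
    0: {"portals": [{"target": 1, "span": (100, 220)}]},
    1: {"portals": []},
}
draws = []
render_sector(world, 0, (0, 320), draws)
# draws: sector 0 over the full screen, then sector 1 clipped to the portal
```

Because nothing requires the portal graph to embed consistently in a single space, the same recursion renders "impossible" geometry for free.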

Collisions through portals are much harder than rendering through portals.


Room over room (as done in later Build games) usually does not require the sectors to overlap in this manner, as it is a different thing (it is an extension of how swimmable water works, where the engine renders the other sector instead of the floor/ceiling).

While Lunatic Fringe is a pretty in-your-face example of impossible geometry in Build, the Duke3D maps contain many more cases of intersecting geometry. Obviously such things are impossible in Doom, because there is no way to build a BSP tree out of that and because Doom only tracks X/Y coordinates of player(s)/monsters.


Sure, you can do this in Doom, or at least it can be done in modern source ports.

https://www.youtube.com/watch?v=Iq1-TZXz9xo (myhouse.wad)

There is nothing about the data structures id chose to use in Doom that prevents portal shenanigans. id (John Carmack) just did not implement them.


Modern source ports are a totally different thing. If anything, try to do what the former comment states with a Boom-compatible engine such as Crispy Doom or Chocolate Doom.


You're confusing things a bit here, Boom is already an enhanced port and it got the kind of line-to-line teleport that allows you to do this.

Crispy and Chocolate Doom are what the community calls "vanilla" Doom, with no enhancements at all.

That said, if you are willing to hex-edit the map data, you can have non-euclidean geometry in vanilla Doom: https://doomwiki.org/wiki/Linguortal


Crispy Doom will play most Boom compatible levels just fine. I tried it with some Megawads, and I had no issues, even with FreeDoom as the IWAD.

No, I'm not confused. To me, Boom is pretty close to vanilla. ZDoom and the rest are pretty much another league, closer to Quake/HL than Doom.


No mention of the real Prey (2006) in a discussion of weird geometry in games?


I was truly amazed by this game! You don’t see anything like that elsewhere


The portals in Prey were cool, but the games Portal and Portal 2 are definitely places you could see it elsewhere. Also, in Portal you can actually choose where the portals are, whereas Prey's portals were fixed in place.


Is there no way of building a BSP out of it? I don't see why it has to be Euclidean to be partitioned, and loops are also possible (e.g. Deimos Lab). (The coordinates are definitely a blocker, so this question is academic.)


The “sectors” in the resulting level data have to be convex for the doom engine. The “asset pipeline” handles that by breaking up non-convex geometry into smaller convex sectors. So, there are no loops or holes in the actual level data, also from the cursory glance both of the large loops in Deimos Lab are actually not complete loops, but they have a place where the loop is broken. But that does not matter that much, as almost any level contains geometry that is either concave or has a hole (courtyard in E1M1, both large rooms in MAP01…)


The Doom design starts with X,Y,Z and then finds your BSP node. You can't warp to a different X,Y,Z without some type of warp portal which doesn't exist in Doom. Modern Doom forks have this feature and also have graphics portals so you can implement this (as MyHouse.pk3 does).


> only tracks X/Y coordinates of player(s)/monsters.

It does, otherwise Lost Souls and Cacodemons wouldn't be able to fly.


I meant that as in contrast to build tracking the sector the thing is in. Due to that you can have two sectors that intersect not only in 2D, but even in 3D and the engine still does the right thing (ignoring the fact that the renderer gets slightly confused when there are overlapping sectors in the view)


Portals allow weird stuff (non-linear geometry) in a BSP level. I thought Doom had portals.


The primary thing that BSP does is that it maps coordinates onto a BSP node/sector. Thus you cannot have overlapping geometry, as this mapping would not be unique. Quake has some idea of portals (I'm not sure about Doom), but it is used only as an additional layer of optimization, the engine is not fundamentally portal-based.
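The point about unique mapping can be sketched as a tiny point-location walk (hypothetical names, 2D for simplicity): the tree is traversed by testing the point against each node's splitting line, so a coordinate can only ever end at one leaf sector.

```python
def locate(node, x, y):
    """Internal nodes are dicts with a splitting line (a, b, c), meaning
    a*x + b*y + c >= 0 is the 'front' side; leaves are sector ids."""
    while isinstance(node, dict):
        a, b, c = node["plane"]
        node = node["front"] if a * x + b * y + c >= 0 else node["back"]
    return node

# One vertical split at x = 10: sector 0 to the right, sector 1 to the left.
tree = {"plane": (1, 0, -10), "front": 0, "back": 1}
right = locate(tree, 15, 0)   # lands in sector 0
left = locate(tree, 5, 0)     # lands in sector 1
```

Overlapping sectors would require one (x, y) to resolve to two leaves, which this walk structurally cannot do.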


There is a very interesting Doom level named “myhouse.wad” that does a lot of clever things to seemingly allow room-over-room.


I don't think that mod works on a normal Doom engine.


There is an interesting VR game called "Tea For God" that puts you in your play area, and cleverly creates new corridors and rooms as you make it around the corner or lift so there is an illusion that you are in a very vast place, despite never leaving the same room, all without using the joystick or teleporting.


I remember a Descent PVP level that was a big room with a corridor that came off of one end and went to the other by looping back through the main room, but was not visible inside of the main room. It was a bit of a mindbender in a game that already stretched the player's spatial awareness.


This video does a good job explaining it, yes it really is quite mind-bending.

https://www.youtube.com/watch?v=UitzmhJe578


> the "3 houses with 3 utilities without crossing" puzzle

In graph theory's terminology, this is "K3,3", one of two irreducible nonplanar graphs. The other one is K5.

You can also make all the connections without crossing any edges if you embed the graph in a torus, which is equivalent to building a bridge over some set of edges that other edges are allowed to take.
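The nonplanarity of both graphs falls out of the standard Euler-formula counting argument, which is easy to check numerically (this is a sketch of that specific argument, not a general planarity test): a planar graph with v >= 3 vertices has at most 3v - 6 edges, and a bipartite planar graph at most 2v - 4.

```python
def planar_edge_bound(v, bipartite=False):
    """Maximum edge count allowed by Euler's formula for a planar graph."""
    return 2 * v - 4 if bipartite else 3 * v - 6

k5_edges = 10     # K5: 5 vertices, every pair joined
k33_edges = 9     # K3,3: 3 + 3 vertices, 3 * 3 edges
k5_nonplanar = k5_edges > planar_edge_bound(5)           # 10 > 9
k33_nonplanar = k33_edges > planar_edge_bound(6, True)   # 9 > 8
print(k5_nonplanar, k33_nonplanar)                       # True True
```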


I spent so much time back in the day working with build. In many ways it was super easy (and fun!) to use.

Thanks for the throwback!


> also drove the 'room over room' technology which was groundbreaking at the time.

Descent did this in 1994.


Descent was a polygonal game though; Duke3D was a raycasting engine. It was the first for a raycasting engine, and quite impressive.


Duke3D isn't a raycaster; it's a portal-based quad rasterizer. Wall quads are forced vertical, and floor quads are rasterized using a neat trick. The renderer computes successive lines of constant distance along sloped floor surfaces, then draws textures using affine scaling, which is correct along those lines. The portal walls clip the draw of successive sectors, which solves the hidden surface problem.

https://fabiensanglard.net/duke3d/build_engine_internals.php
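The constant-distance trick is easiest to see for a flat floor (a hypothetical sketch with made-up camera parameters; Build extends the idea to sloped surfaces): every pixel on one screen row below the horizon lies at the same world distance, so texture coordinates can be stepped linearly along that row.

```python
def floor_row_distance(screen_y, horizon_y, cam_height, focal_len):
    """World distance of the flat-floor strip seen at a given screen row."""
    dy = screen_y - horizon_y
    if dy <= 0:
        raise ValueError("row is at or above the horizon")
    return cam_height * focal_len / dy

# Rows further below the horizon are closer to the camera.
near = floor_row_distance(200, 100, cam_height=32, focal_len=160)  # 51.2
far = floor_row_distance(110, 100, cam_height=32, focal_len=160)   # 512.0
```

Since the distance is constant across the row, affine (cheap, per-pixel-addition) texture stepping is exactly correct there, with no per-pixel divide.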


I love that this is a common enough problem that there's an entire domain dedicated to it:

https://0.30000000000000004.com/
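The site's namesake is easy to reproduce: 0.1 and 0.2 have no exact binary representation, so their rounded sum misses 0.3 (shown here in Python; the same happens in any IEEE 754 double arithmetic).

```python
import math
from decimal import Decimal

total = 0.1 + 0.2
print(total)                          # 0.30000000000000004
print(total == 0.3)                   # False

# Standard workarounds: tolerance-based comparison or decimal arithmetic.
print(math.isclose(total, 0.3))       # True
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```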


That is a simple but clever technique. These sorts of approaches feel like they will be part of a more formalized software engineering profession.


There are a lot of crazy people in industry, unfortunately. Not as much at the big flashy tech startups or places you hear about in the news, but there are still plenty of 'work a day' businesses in corners of industry that do not have expertise in these areas that absolutely should.

I helped my current employer make this transition and it was similar to the description in the article. Controlling database change in source was foreign to them.


Software bugs cause mass harm. We can't ask people to never make mistakes, but we can ask that they have appropriate practices, standards, quality controls, and care. Not taking appropriate measures to mitigate risks that can significantly affect millions of users is willful negligence, and should be a crime.


Not the op, but meaningful fines, executive jail time for gross negligence and especially for intentionally taking inappropriate risks, breaking up or closing companies that are shown over time to be unable to safely handle sensitive information. Proper regulation. Consequences that can't be cynically taken as the cost of doing business.


Jail time for bugs? Have people here ever worked on products?

Bugs and security vulns are literally inevitable. Security is important, but if this were the standard, I'm not sure any company would still exist.


Jail time for bugs that should have been preventable and caused harm to users. Mistakes and bugs happen, but we also have methods of mitigating them: standards, quality controls, tests, analysis, and other care. I specifically said jail time for gross negligence, because that means not taking care and allowing harm to users.

If you had an error that leaked private information, it's worth an investigation. If it made it through despite controls, that's understandable. If they find you failed to do analysis on the risk to users privacy, if you failed to have controls in place, if you didn't code review or test the code, then you have made specific choices that harmed users. That should be criminal.

We need to take software engineering seriously as a discipline. We have the potential to do more wide scale aggregate harm than any structural engineering collapse. We need to start acting like it.


What is "should have been preventable"? Mandatory continuous fuzzing of all apis? Interprocedural static analysis to detect all of the owasp top ten? Manual audits of all dependencies and transitive dependencies on every update? Hire world class auditors to manually inspect code?

I'm a huge security person. It's my job. But its unbelievably difficult to secure programs even if there are clear steps in hindsight that could have prevented a bug.


> What is "should have been preventable"? Mandatory continuous fuzzing of all apis? Interprocedural static analysis to detect all of the owasp top ten? ...

All of the above, possibly. Other engineering disciplines seem to have defined what constitutes due diligence just fine. This isn’t a novel problem.

It’s obviously not possible to make anything perfectly safe or perfectly secure. But it’s certainly possible to define a minimum amount of effort that must be put towards these goals in the form of best practices, required oversight, and paper trails.

Edit: Even “fuzzy” disciplines like law have standards for what constitutes malpractice or negligence when representing a client.


Nobody said jail time for bugs, and phrasing that way is intentionally obscuring the debate. Gross negligence is an entirely different standard than just software bugs.


Lots of people in this thread are explicitly saying jail time for bugs.

What evidence is there that this was gross negligence?


But why do financial services bugs garner a higher penalty than one that exposes private photos? This is an argument for regulation.


> Security is important but it this was the standard I'm not sure that any company would still exist.

This is true, and it's also the reason why there are more software vulnerabilities than necessary. Software could be a lot more secure. There will always be bugs, but it is possible to build software and platforms with many fewer vulnerabilities. But it's expensive, so we don't, and users suffer the consequences while the companies shrug their shoulders and count their money.


For me, in Chrome, I could not find "Show In Viewer", but the static preview image on the left side has a "launch" text splash when hovering. Clicking it opens a player with play/pause/step and other functionality.


Have we actually seen the real interface? There was the first round, which was a mockup many were told was a 'screenshot', and a follow up that was a second mock up that was closer.

http://www.civilbeat.org/2018/01/hawaii-distributed-phony-im...

It's a common enough UI issue to be immediately clear to a professional how it happened though.


"HEMA can’t publicize the actual screen because of security concerns — the system could then be vulnerable to hackers, Rapoza said."

The level of incompetence is astounding!


It's unreal.

> “We asked (Hawaii Emergency Management Agency) for a screenshot and that’s what they gave us,” Ige [Hawaii Governor] spokeswoman Jodi Leong told Civil Beat on Tuesday. “At no time did anybody tell me it wasn’t a screenshot.”

So the governor asked for a screenshot and they sent him a "mockup" instead of the actual interface?

I can only assume the actual interface was somehow even worse than the fake.


Given all of the incompetency here, including the awful UI, and the password on a sticky note that got leaked, I wouldn't be surprised if the links themselves leaked info like: "Send missile alert (confirmation password is hawaii1)".


Not claiming they are competent. However, it may be the case that the real screenshot would be exactly as shown, but also includes a few extra lines of "buttons" that have captions not meant for a public audience. This ban may be coming from the federal level. Or it could reveal they are using IE5 or something. Just a far-fetched theory.

However, I'd still stand with Peter here [1] and think they just could not get the "screenshot tool" installed to the machine.

[1] https://en.m.wikipedia.org/wiki/Peter_principle


Everyone is talking about interfaces and clicks. Yeah, we know he clicked the wrong item. I'd be interested to know if there was a paper manual sitting on his desk with procedure instructions for this type of situation and whether or not his mistake was either not following the procedure or following it incorrectly. For all we know, there could be a control to avoid a false alert, even given a shitty interface, that should have been followed.


Wow, that link just loads a solid white page if you have JS disabled. It even scrolls, presumably, across the actual length of the content.

Amazing.

