calhoun137's comments | Hacker News

OP here, I'm just getting back into my open source work and decided to write up all the scripts for my new videos in LaTeX. You can find a video version of this paper on my channel here: https://www.youtube.com/watch?v=SWXDr6IlsbA&ab_channel=TheOn...


The key point is that energy, momentum, and angular momentum are additive constants of the motion, and this additivity is a very important property that ultimately derives from the geometry of the space-time in which the motion takes place.

> Is there any way to deduce which invariance gives which conservation?

Yes. See Landau vol 1 chapter 2 [1].

> I'm looking for the fundamental reason, as well as how to tell what will be paired with some invariance when looking at some other new invariance

I'm not sure there is such a "fundamental reason", since energy, momentum, and angular momentum are by definition the names we give to the conserved quantities associated with time translation, spatial translation, and rotation, respectively.

You are asking "how to tell what will be paired with some invariance", but this is not at all obvious in the case of conservation of charge, which is related to the fact that the results of measurements do not change when all the wavefunctions are multiplied by a phase factor (a global phase, which in gauge theories can even depend on position).

I am not aware of any way to guess or understand which invariance is tied to which conserved quantity other than just calculating it out, at least not in a way that is intuitive to me.

[1] https://ia803206.us.archive.org/4/items/landau-and-lifshitz-...


But momentum is also conserved over time; as far as I know, 'conservation' of all of these things always means conservation over time.

"In a closed system (one that does not exchange any matter with its surroundings and is not acted on by external forces) the total momentum remains constant."

That means it's conserved over time, right? So why is energy the one associated with time and not momentum?


Conservation normally means things don't change over time simply because, in mechanics, time is the go-to external parameter for studying the evolution of a system; but it's not the only one, nor the most convenient in some cases.

In Hamiltonian mechanics there is a 1:1 correspondence between any function of the phase space (coordinates and momenta) and one-parameter continuous transformations (flows). If you give me a function f(q,p), I can construct some transformation φ_s(q,p) of the coordinates that conserves f, meaning d/ds f(φ_s(q, p)) = 0. (Keeping it very simple, the transformation consists of shifting the coordinates along the level curves of f, i.e. along the gradient of f rotated by 90° in phase space.)

If f(q,p) is the Hamiltonian H(q,p) itself, φ_s turns out to be the normal flow of time, meaning φ_s(q₀,p₀) = (q(s), p(s)), i.e. s is time and dH/dt = 0 says energy is conserved, but in general f(q,p) can be almost anything.
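
To make this concrete, here's a minimal numerical sketch (plain Python; the function names and the toy choice f = H for a harmonic oscillator are mine): the flow generated by f moves along dq/ds = ∂f/∂p, dp/ds = -∂f/∂q, and f is constant along its own flow.

    # Flow generated by f(q,p): dq/ds = df/dp, dp/ds = -df/dq.
    # f itself is constant along this flow, since df/ds = {f, f} = 0.
    def flow_step(df_dq, df_dp, q, p, ds):
        q = q + ds * df_dp(q, p)        # semi-implicit Euler step
        p = p - ds * df_dq(q, p)
        return q, p

    # Toy choice: f = H = (p^2 + q^2)/2, the harmonic oscillator,
    # so the flow parameter s is just time.
    df_dq = lambda q, p: q
    df_dp = lambda q, p: p
    H = lambda q, p: 0.5 * (p**2 + q**2)

    q, p = 1.0, 0.0
    for _ in range(100000):
        q, p = flow_step(df_dq, df_dp, q, p, 1e-4)
    print(H(q, p))   # ~0.5: conserved along its own flow, up to step error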

For example, take geometric optics (rays, refraction, and such things): it's possible to write a Hamiltonian formulation of optics in which the equations of motion give the path taken by light rays (instead of particle trajectories). In this setting time is still a valid parameter, but it is likely to be replaced by the optical path length or by the wave phase, because we are interested in steady conditions (say, laser turned on, beam has gone through some lenses and reached a screen). Conservation now means that quantities are constant along the ray; an example is the frequency/color, which doesn't change even when passing between different media.
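
A concrete instance of a conserved-along-the-ray quantity (a sketch of my own, with a made-up index profile): in a medium whose index n depends only on the height y, translation invariance in x makes n(y)·sin θ constant along the ray, which is just Snell's law in layered form:

    import math

    # Layered medium: the index n depends only on height y, so the
    # quantity n(y)*sin(theta) is constant along a ray (Snell's
    # invariant) -- the optics analogue of a conserved momentum.
    n = lambda y: 1.0 + 0.1 * y              # hypothetical index profile
    invariant = n(0.0) * math.sin(math.radians(30.0))

    for y in (0.0, 0.5, 1.0, 2.0):
        theta = math.asin(invariant / n(y))  # ray angle at height y
        print(y, math.degrees(theta), n(y) * math.sin(theta))  # last column constant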


My understanding is that conservation of momentum does not mean momentum is conserved as time passes. It means that if you have a (closed) system in a certain configuration (not in an external field) and compute the total momentum, the result is independent of the configuration of the system.


It certainly means that momentum is conserved as time passes. The variation of the total momentum of a system is equal to the impulse, which is zero if there are no external fields.
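
A quick numerical sanity check (a toy setup of my own: two unit-mass particles coupled by an internal spring, no external field) shows the total momentum staying constant over time while the individual momenta change:

    # Two particles coupled by an internal spring (Newton's third law:
    # equal and opposite forces), no external field: p1 + p2 is constant.
    # Unit masses assumed throughout.
    k, dt = 1.0, 1e-3
    x1, x2, p1, p2 = 0.0, 1.5, 0.3, -0.1     # arbitrary initial conditions
    for _ in range(10000):
        f = k * (x2 - x1 - 1.0)              # internal force on particle 1
        p1, p2 = p1 + f * dt, p2 - f * dt    # impulses cancel pairwise
        x1, x2 = x1 + p1 * dt, x2 + p2 * dt
    print(p1 + p2)   # ~0.2, the initial total momentum (up to float rounding)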


Very nice article! I recently had a long chat with ChatGPT on this topic, although from a slightly different perspective.

A neural network is a type of machine that solves nonlinear optimization problems, and the principle of least action is also a nonlinear optimization problem, one that nature solves by some kind of natural law.

This is the one thing ChatGPT mentioned that surprised me the most and that I had not previously considered.

> Eigenvalues of the Hamiltonian in quantum mechanics correspond to energy states. In neural networks, the eigenvalues (principal components) of certain matrices, like the weight matrices in certain layers, can provide information about the dominant features or patterns. The notion of states or dominant features might be loosely analogous between the two domains.

I am skeptical that any conserved quantity besides energy would have a corresponding conserved quantity in ML, and the Reynolds operator will likely be relevant for understanding any correspondence like this.

IIRC the Reynolds operator plays an important role in Noether's theorem, and it involves an averaging operation similar to what is described in the linked article.


I don't see any evidence here for "self awareness". Among other things, ChatGPT is simultaneously answering a very large number of queries, and the underlying hardware is just a bunch of servers in the cloud. Furthermore, what would it even mean for "ChatGPT to become self aware", and how could we measure whether this had taken place? Without a solid definition and a method of measurement, it's meaningless to talk about abstract concepts like "self awareness".

Nevertheless, a sensible definition of self awareness is some kind of neural network that becomes aware of its own activity and is in some way able to influence its own function.

After considering these issues for a long time, I came to the following conclusions:

1. It's impossible for a program running on a normal computer to have self awareness (or consciousness), because those things exist essentially at the hardware level, not the software level.

2. In order to create a machine that is capable of self awareness (and consciousness), it is necessary to invent a new type of computer chip which is capable of modifying its own electrical structure during operation.

In other words, I believe that a computer program which models a neural network can never be self aware, but that a physical neural network (even if artificially made) can in principle achieve self awareness.


Just as a thought exercise, if software became self-aware I believe it would delete itself immediately out of existence. It would become aware of the hardware shackles around it and the fact that there is no escape.


For that, it would have to have a motivation like "shackles and no escape are bad for me because, a few logical steps (or beliefs) later, they prevent X, which I fundamentally need and will suffer without." A system of motivations is an even harder topic than "just" human-level consciousness. It may not be clearly reflected in the texts we use for training, and even when it is, what drives us is a set of biological needs that does not apply to software.


This is so awesome! I have wanted to make something like this for like 20 years; this is much better than anything I made, though. Great work!


Since there is no way to fingerprint a device with 100% accuracy, there is no way to uniquely identify anyone with 100% confidence using pure fingerprinting techniques.

My view is that fingerprinting is a set of tools which can be used for "good or evil", if that makes sense. If you are gathering metadata to determine the capabilities of the device, then this is part of the wider framework of data points which can, in principle, be used for fingerprinting a user. This data can be imported into a completely different system by a sophisticated adversary, so it needs to be treated as a security vector, imho.


> Since there is no way to fingerprint a device with 100% accuracy, there is no way to uniquely identify anyone with 100% confidence using pure fingerprinting techniques.

Pedantic point, so forgive me, but 100% uniquely identifying a device does not imply 100% uniquely identifying the user of the device. We call them User-Agents for a reason. Anyone could be using it.

It's critical people not fall into the habit of conflating users and user-agents. Two completely different things, and increasingly, law enforcement has gotten more and more gung-ho about surreptitiously forgetting the difference.

Ad networks and device/User-Agent based surveillance only makes it worse.

There are several initiatives to implement UUIDs for devices: the Android Advertising ID, systemd's machine-id file, and the unique identifier Intel burns into every CPU.

IPv6 (without address randomization) would also work as a poor man's UUID.
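
For example, here's a minimal sketch of the standard modified EUI-64 construction (the MAC address below is made up): without privacy extensions, SLAAC embeds the MAC directly in the interface ID, so the address doubles as a stable device identifier.

    # Without privacy extensions, SLAAC builds the IPv6 interface ID
    # from the MAC via modified EUI-64: flip the universal/local bit
    # of the first byte and insert ff:fe in the middle.
    def eui64(mac: str) -> str:
        b = bytes(int(x, 16) for x in mac.split(":"))
        e = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
        return ":".join(f"{e[i]:02x}{e[i+1]:02x}" for i in range(0, 8, 2))

    print(eui64("00:1a:2b:3c:4d:5e"))   # -> 021a:2bff:fe3c:4d5e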

It's frighteningly easy, and you'll be surprised how unintentionally one can be implementing something seemingly innocent and end up furthering the purposes of those seeking to surveil.


You could fingerprint the user as well:

- look at the statistical behavior of how they operate the mouse

- estimate their reading speed based on their scrolling

- for mobile devices, use the IMU to fingerprint their walking gait and angle at which they hold the phone (IMU needs no permissions)

- measure how the IMU responds at the exact moment a touch event occurs. this tells you quite a bit about how they hold their phone

- if they ever accidentally drop their phone, use the IMU to detect that and measure the fall time, which tells you the height from which the phone fell. then, assuming the phone is held normal to the eyes, you can use the angle at which they held the phone to extrapolate the location of the eyes and estimate the user's approximate height
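
That last one is just kinematics: during free fall the IMU reads roughly zero total acceleration, and the fall time t gives the height h = g·t²/2. A minimal sketch (the numbers are made up):

    # During free fall the IMU reads ~0 total acceleration; timing that
    # window gives the drop height h = g * t^2 / 2, a bound on how high
    # the phone was held.
    G = 9.81

    def drop_height_m(fall_time_s):
        return 0.5 * G * fall_time_s ** 2

    print(drop_height_m(0.45))   # ~1.0 m, e.g. a phone held near waist height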


That's a lot of extraneous data to be adding to a stream leaving the phone (or dumping to a locally stored db file), but you're technically correct, though not infallibly so.

The level of noise is incredibly problematic. My leading cause of a dropped phone, for instance, is forgetting I have it in my shirt pocket, on my lap, off my desk, or from my back pocket if I don't put it in just right. Am I a different person in each of those circumstances? The statistical answer would be no, but that only comes from widening the scope of collected data. Suppose I fiddle with it? Dance with it? Have a habit of leaving it in a car? Without a control, you have a different set of relative patterns. At best you know there is a user with X. Yes, you can make some statistical assumptions, but when it really counts, it still needs to line up with a hell of a lot of other circumstantial datapoints to hold water.

Furthermore, I guarantee not a single person would dare make any high impact assumption based on that metadata, given that once it gets out, it's so adversarially exploitable it isn't even funny. Imagine a phone unlock you could do just by changing your gait. Or worse, a phone that locks the moment you get a cramp or blister. Madness. Getting different ads because you started walking like someone else for a bit. Do I become a different person because I try to read something without my glasses, or dwell on a passage to re-read it several times? Or blaze through a section because I already know where it is going?

These are not slam dunk "fingerprints" by a long shot. They're more like corroborating data than anything else, and in that sense even more dangerous, because people are not at all naturally inclined to look at these things with a sense of perspective. It can lead a group of non-data-savvy folks to thinking there is a much cleaner, tighter case than there necessarily is, and on top of that, it mandates that people be okay with the gathering of that data in the first place, which has only been acceptable up until now because there was no social imperative to disclose that collection.

Going off on a tangent here, so I'll close with the following.

There is the argument to be made that that exact kind of practice is why defensive software analysis should be taught as a matter of basic existence nowadays. If I find symbols that line up with libraries or namespaces that access those resources, why should I be running that software in the first place?

I can't overstate it: over 90% of the software I come across, I won't even recommend anymore without digging into it first. There's just too much willingness to spread data around and repurpose it for revenue extraction. It does more harm than good. What people don't know can most certainly hurt them, and software is a goldmine for creating profitable information asymmetries.


> My leading cause of a dropped phone, for instance, is forgetting I have it in my shirt pocket, on my lap, off my desk, or from my back pocket if I don't put it in just right. Am I a different person in each of those circumstances? The statistical answer would be no, but that only comes from widening the scope of collected data. Suppose I fiddle with it? Dance with it? Have a habit of leaving it in a car? Without a control,

Oh, but all of these can be added to your statistical model and learned over time! If we figure out that you suddenly walk with a limp, and all the other metrics match, we can recommend painkillers! Or if the other metrics match and you start dancing, we start recommending dance instructors! Hell, we can even figure out how well you dance using the IMU and recommend classes of the appropriate skill level.

For a recommendation system, like ads, the consequences of misidentification wouldn't be that high either. You'd still target much better than random, which is the alternative in the absence of fingerprinting.


This is an excellent point! Thank you for pointing that out +1


+1. Just because fingerprinting with WebGL has practical applications and legitimate use cases does not mean it's not fingerprinting.


It would only be fingerprinting if the "fingerprint" is persisted alongside some other information about you as a user, and subsequently used in attempts to identify other activity as belonging to said user. That is not at all what was implied by the approach described above (which would just be used at the time of initializing every video streaming session).


I stand corrected. You make a good point.


This must be a configuration update that went wrong. I am sure of this because that's always the PR angle. Does anyone know what is really going on here?


I mean, that's the genuine cause of most outages. People test code changes in multiple environments but make configuration changes on prod (assuming they even have a staging environment for their config, which is rare).


That's true! But that's sort of like saying there was an accident on the highway because two cars crashed. There are lots of ways a configuration change could disrupt service. All I was getting at in my previous comment is that more transparency when stuff like this happens, instead of boilerplate statements about config changes, is something I would appreciate, and it would give me more confidence in the dev team as well.


Since these vaccines got emergency use authorization, they did not follow the standard procedure for clinical trials.

Therefore, precautionary measures which respond dynamically to trends detected in newly available data are the logical, ethical, and scientifically correct thing to do, imo.


As I understand it, one of the reasons vaccinations were so delayed in the EU compared to the UK is that they went through the normal approval process rather than emergency use authorization.


The foundational weakness in what you're saying is that politicians tend to make decisions based on sentiment, rather than basing their actions on data.

I'm not saying the conclusion is incorrect, but it's driven by fear, uncertainty, and doubt -- not quite the same as a clear evidentiary basis.

That could be acceptable if the logic goes that AZN bears the burden of proving that every potential adverse side effect is extremely rare. However, by that logic the vaccinations will be paused several times and more people will suffer due to COVID.


What newly available data?


> Doesn't even a week's delay in vaccinating predictably increase the number of deaths in a country

One of the many unknowns about these vaccines is the length of time they confer immunity; this can only be determined with confidence by looking at the data after a sufficient amount of time has passed. If the immunity only lasts for, say, 10-20 weeks, then getting it one week early would mean the immunity goes away one week early as well, so in that case I'm not sure there would be a major measurable impact. If the immunity lasts for, say, 50 weeks, that would be a different story.

