For credits not to be considered a money substitute, they must be non-transferable, non-refundable, and have an expiration date. Without an expiration date, unused credits cannot be accounted for as revenue; they have to be carried as a liquid, cash-like balance instead.
Best practice is to set a long expiration date, such as 1-2 years; regulations vary by state. After that, unused credits can be recognized as breakage revenue.
If a company treats credits as money, it will have to comply with numerous financial regulations. For example, if a company compensates for SLA breaches with cash rather than credits, this could be considered insurance.
I agree that time isn’t an input in the economic system.
That said, one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes.
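For concreteness, here's a minimal sketch (my own illustration, standard library only) of what "time as an input" means in those two closed-form models:

```python
# Toy illustration: in closed-form models, time really is just a scalar input.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def compound_interest(principal: float, rate: float, years: float) -> float:
    """Continuously compounded growth: time t enters the formula directly."""
    return principal * exp(rate * years)

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes European call price; T (time to expiry) is an input."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(compound_interest(100.0, 0.05, 10.0))              # ~164.87
print(black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0))  # ~10.45
```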
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time; some are paid cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
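To make "an agent for every household and every firm" concrete, a deliberately tiny toy sketch (my own, not any published model) of that structure might look like:

```python
# Minimal agent-based sketch: one agent per household and per firm,
# stepped in discrete rounds. Rules are deliberately crude.
import random

random.seed(0)

class Household:
    def __init__(self) -> None:
        self.cash = 100.0

class Firm:
    def __init__(self) -> None:
        self.price = 1.0
        self.revenue = 0.0

households = [Household() for _ in range(1_000)]
firms = [Firm() for _ in range(50)]

for month in range(12):
    # Households spend at a randomly chosen firm (crude consumption rule).
    for h in households:
        firm = random.choice(firms)
        spend = min(h.cash, 10.0)
        h.cash -= spend
        firm.revenue += spend
    # Firms nudge prices toward demand, then pay revenue back out as wages.
    for f in firms:
        f.price *= 1.01 if f.revenue > 200.0 else 0.99
        wage = f.revenue / len(households)
        f.revenue = 0.0
        for h in households:
            h.cash += wage

print(sum(f.price for f in firms) / len(firms))  # average price after a year
```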
> Although, one can use either discrete or continuous time to simulate a complex economic system.
A very bad model that lacks accuracy and precision, yes. Maybe if you're a PhD quant at Citadel you can create a very small statistical edge when gambling on an economic system. There's no analytic solution to complex economic systems in practice; it's just noise and various ways of validating the efficient market hypothesis.
Also, because of heteroskedasticity and volatility clustering, using time-based bars (e.g. change over a fixed interval of time) is not ideal in modeling. Sampling with entropy bars like volume imbalance bars, instead of time bars, gives you superior statistical properties, since information arrives in the market at irregular times. Sampling by time is never the best way to simulate/gamble on a market. Information is the causal variable, not time. Some periods of time have very little information relative to other periods of time. In modeling, you want to smooth out information independently of time.
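A rough sketch of the sampling idea (my own illustration of plain volume bars; imbalance bars go further and threshold on cumulative signed flow): bars close on traded volume rather than the clock, so busy periods get more samples.

```python
# Toy illustration: resample a tick stream into volume bars, closing a bar
# every `threshold` units of traded volume instead of every N seconds.
from dataclasses import dataclass

@dataclass
class Bar:
    open: float
    high: float
    low: float
    close: float
    volume: float

def volume_bars(ticks, threshold: float):
    """ticks: iterable of (price, volume); yields one Bar per `threshold` volume."""
    bar, acc = None, 0.0
    for price, vol in ticks:
        if bar is None:
            bar = Bar(price, price, price, price, 0.0)
        bar.high = max(bar.high, price)
        bar.low = min(bar.low, price)
        bar.close = price
        bar.volume += vol
        acc += vol
        if acc >= threshold:
            yield bar
            bar, acc = None, 0.0

ticks = [(100.0, 5), (100.5, 20), (99.8, 40), (100.2, 10), (100.1, 80)]
for b in volume_bars(ticks, threshold=50):
    print(b)
```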
I think this is pretty in-the-weeds compared to the original thread:
> Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
Understanding the general tendencies of economic systems over time (e.g. “the rate of profit tends to fall”) is much more abstract than attempting to win at economics using time-based analysis.
It started that way. I was responding to the assumption that time is the underlying variable of all economics. Then someone said everything reduces to time and they brought up Black-Scholes, a quant tool to price options. I didn't bring it up lol. My point is simply no, time is demonstrably not fundamental at all.
Edit: an LLM thinks I'm overly dismissive of:
- Standard economic modeling
- Dynamic macroeconomic theory
- Agent-based economics
- The legitimate uses of time in economics
This is true. I think causal inference in finance and economics is difficult. As Ludwig von Mises argued, mathematical models give spurious precision when applied to purposeful behavior. Academic ideas don't have a built-in feedback loop like in quant finance.
I use symbolic links, and Claude Code often gets confused: it takes several iterations to understand that CLAUDE.md is actually a symbolic link to AGENTS.md, not a separate, duplicate file.
The recommended approach has the advantage of separating information specific to Claude Code, but I think that in the long run Anthropic will have to adopt the AGENTS.md format.
Also, when using separate files, memories will be written to CLAUDE.md, and periodic triage will be required: deciding what to leave there and what to move to AGENTS.md.
> Why do you need exceptions at all? They’re just different return types in disguise…
You don’t need exceptions, and they can be replaced by more intricate return types.
OTOH, for the intended use case of signalling conditions that most code directly calling a function does not expect and cannot do anything about, unchecked exceptions reduce code clutter (checked exceptions are isomorphic to "more intricate return types"), at the expense of making the potential error cases less visible.
Whether this tradeoff is a net benefit is somewhat subjective and, IMO, highly situational. But if (unchecked) exceptions are available, you can always convert any encountered in your code into return values by way of handlers (and conversely you can do the opposite), whereas if they aren’t available, you have no choice.
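A minimal sketch of that conversion in Python (my own illustration; the helper names are hypothetical, not any particular library):

```python
# Hypothetical illustration: turning unchecked exceptions into values and back.
from typing import Callable, TypeVar, Union

T = TypeVar("T")

def to_result(f: Callable[[], T]) -> Union[T, Exception]:
    """Exception -> return value: catch at the boundary, hand back the error."""
    try:
        return f()
    except Exception as e:
        return e

def to_exception(value: Union[T, Exception]) -> T:
    """Return value -> exception: re-raise where a caller prefers unwinding."""
    if isinstance(value, Exception):
        raise value
    return value

res = to_result(lambda: int("not a number"))  # ValueError captured as a value
print(isinstance(res, Exception))             # True
```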
Correct, but that's not how I think about systems.
Most problems stem from poor PL semantics and badly designed stdlibs/APIs.
For exogenous errors, Let It Crash, and let the layer above deal with it, i.e., Erlang/OTP-style.
For endogenous errors, simply use control flow based on return values/types (or algebraic data types with exhaustive checking). For simple cases, something like Railway Oriented Programming, as sketched below.
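A bare-bones railway-style chain in Python might look like this (a toy sketch under my own conventions, not a recommendation of any specific library):

```python
# Toy railway-oriented pipeline: each step returns either ("ok", value)
# or ("err", reason), and bind() short-circuits on the first error.
from typing import Any, Callable, Tuple

Result = Tuple[str, Any]  # ("ok", value) | ("err", reason)

def bind(result: Result, step: Callable[[Any], Result]) -> Result:
    tag, payload = result
    return step(payload) if tag == "ok" else result

def parse(raw: str) -> Result:
    return ("ok", int(raw)) if raw.isdigit() else ("err", f"not a number: {raw!r}")

def check_positive(n: int) -> Result:
    return ("ok", n) if n > 0 else ("err", "must be positive")

print(bind(parse("42"), check_positive))   # ('ok', 42)
print(bind(parse("abc"), check_positive))  # ('err', "not a number: 'abc'")
```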
It's a domain specific answer, even ignoring the 0/0 case.
And that's even ignoring the "which side of the limit are you coming from?" question, where "a" and/or "b" might be negative. (Is it positive infinity or negative infinity? The sign of "a" alone doesn't tell you the answer.)
Because sometimes the question is like "how many things per box if there are zero boxes?" Your answer isn't infinity; it's an invalid answer altogether.
The limit of 1/x or -1/x might be infinity (or negative infinity), and in some cases that might be what you want. But sometimes it's not.
Division by zero is mathematically undefined. So two's complement integer division by zero is always undefined.
For floating point there is the interesting property that 0 is signed, due to its sign-magnitude representation. Mathematically 0 is not signed, but in floating point's sign-magnitude representation, "+0" is equivalent to lim x->0+ x and "-0" is equivalent to lim x->0- x.
This is the only situation where a floating point division by "zero" makes mathematical sense, where a finite number divided by a signed zero will return a signed +/-Inf, and a 0/0 will return a NaN.
Why should 0/0 return a NaN instead of Inf? Because lim x->0 4x/x = 4, NOT Inf.
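You can see all of these cases directly (a quick demo using NumPy, since plain Python floats raise ZeroDivisionError instead of following IEEE 754 here):

```python
import numpy as np

# Suppress the warnings NumPy would otherwise emit for these operations.
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / np.float64(0.0))    # inf   (finite / +0)
    print(np.float64(1.0) / np.float64(-0.0))   # -inf  (finite / -0)
    print(np.float64(-1.0) / np.float64(0.0))   # -inf  (result sign is xor of operand signs)
    print(np.float64(0.0) / np.float64(0.0))    # nan   (0/0 is indeterminate)
```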
> According to the IEEE 754 standard, floating-point division by zero is not an error but results in special values: positive infinity, negative infinity, or Not a Number (NaN). The specific result depends on the numerator
Way back when, during my EE course days, we had like a whole semester devoted to weird edge cases like this, and spent a month on IEEE 754 (precision loss, NaN, divide by zero, etc.).
When you took an IEEE 754 divide-by-zero value as gospel and put it in the context of a voltage divisor that is always negative or zero, getting a positive infinity out of divide by zero was very wrong, in the sense of "flip the switch and oh shit, there's the magic smoke". The solution was a custom divide function that knew the context and would yield negative infinity (or some placeholder value). It was a contrived example for an EE lab, but the lesson was: sometimes the standard is wrong for your problem, and you will cause problems if you follow it blindly.
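Something along these lines (a hypothetical reconstruction of that kind of domain-aware divide, not the original lab code):

```python
import math

def divide_nonpositive_denominator(num: float, den: float) -> float:
    """Division where the denominator's physical domain is (-inf, 0].

    A denominator of exactly 0.0 is treated as the limit from the negative
    side, so the result keeps the sign the domain implies instead of
    IEEE 754's +inf for num / +0.0.
    """
    if den == 0.0:
        if num == 0.0:
            return math.nan                        # 0/0 stays indeterminate
        return -math.inf if num > 0 else math.inf  # limit from the negative side
    return num / den

print(divide_nonpositive_denominator(5.0, -0.00000000001))  # huge negative number
print(divide_nonpositive_denominator(5.0, 0.0))             # -inf, not +inf
```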
But IEEE 754 works as you described in your last comment. It doesn't take the numerator's sign. So what's wrong?
Can you give more context on your voltage math? Was the numerator sometimes negative? If the problem is that your divisor calculation sometimes resulted in positive zero, that doesn't sound like the standard being wrong without more info.
> But IEEE 754 works as you described in your last comment. It doesn't take the numerator's sign. So what's wrong?
The numerator was always positive. The denominator was always negative (negative voltage is a pretty common thing), except when it became zero. That led to surprising behavior.
Right, the whole point of the exercise was that sometimes the standard is wrong for your specific problem at hand. We spent lecture after lecture going over exactly how IEEE 754 precision loss worked, and other edge cases, so we would know how to follow the standard exactly.
Then we had an example where the sudden sign flip from a/-0.00000000001 = <huge_negative_number> to a/0 = <positive_infinity> would cause big problems with a calculation. If you didn't explicitly handle the divide-by-zero case in the "correct for the domain, but not following the IEEE 754 standard" way, you'd fry a component.
It's been a long time so I don't remember the exact setup, just the higher-level lesson of "don't blindly follow standards and assume you don't need to check edge cases (exception or otherwise) just because the standard does things a certain way".
It's a good lesson in defensiveness! But if your value was always either less than zero or negative zero it would have done the right thing, both domain correct and standard correct. It's hard to say exactly why you got positive zero, but my bet is that it's more subtle than the standard doing something you can actually call "wrong".
Yeah, that's totally fair. You'd need to build it in as first-class behavior of your code, which doesn't necessarily mean exceptions are the right way to do it.
Unchecked exceptions are more like a shutdown event that can be intercepted at any point along the call stack, which is useful and not like a return type.
Debugging. It's one of the most useful tools for narrowing down where an error is coming from, and by far the biggest negative of Rust's Result-type error handling in my experience (panics can of course give a call stack, but because value-based errors are most commonly used, this is often far away from the actual error).
(It is in principle possible to construct such a stack, potentially with more context, with a Result type, but I don't know of any way to do so that doesn't sacrifice a lot of performance, because you're doing all the book-keeping even on caught errors where you don't use that information.)
The instrumentation and observability are more heavyweight than the overhead of unwinding the stack, which is already keeping track of the most important information (in most mainstream languages, at least; even if you don't have a contiguous stack, the same information is usually still around at the point an error is created, assuming you have something like functions returning into other functions). Exceptions, as a model, basically let the code that raises an error determine where it will be caught without unwinding away the information that lets you trace from the top level to where the error was raised. It's still a tradeoff, of course (returning errors is more expensive than success), but in practice it's in a much better place than the other options, as is obvious from the fact that errors-as-values implementations rarely keep this information around, especially not by default.
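For a feel of the cost being described, here's a hypothetical error-as-value type in Python that eagerly captures a backtrace at construction; you pay for the capture even when the caller immediately handles and discards the error:

```python
import traceback

class Err:
    """Error value that records where it was created (eager book-keeping)."""
    def __init__(self, message: str):
        self.message = message
        # Captured unconditionally, even if this error is caught and
        # discarded two frames up -- that's the overhead in question.
        self.created_at = traceback.extract_stack()[:-1]

def parse_port(raw: str):
    if not raw.isdigit():
        return Err(f"invalid port: {raw!r}")
    return int(raw)

result = parse_port("http")
if isinstance(result, Err):
    print(result.message)
    print("".join(traceback.format_list(result.created_at[-3:])))
```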
E.g., VCs invest in startups commercializing open-source foundational/infrastructure projects not only for the financial ROI, but also because it helps their portfolio companies succeed faster while maintaining a smaller headcount or spending less on non-core R&D.
> Full-time cofounder(s) should have the right to fire other cofounders,
I'd guess there's usually no point in a technical co-founder "firing" their capable business co-founder; it just ends the company.
While the technical person is spending most of their time on technical bits (no matter how much customer-facing product management time you have), etc., the business person is spending most of their time on relationships (investors, partners, customers, etc.). To a large extent, they take those relationships and reputation with them wherever they go next.
Unless the technical cofounder has some very rare and marketable technical expertise that the business people recognize (e.g., some recent big AI invention, or a fancy title at a FAANG), the technical cofounder will probably be considered an ordinary commodity by most.
> IMO an MBA is not the kind of degree that is worthy of delaying founding a startup
My guess is that an MBA from one of the most prestigious programs is usually worth delaying founding a startup. The MBA student can lay some of the groundwork while in the program, get mentoring and connections, and then out-execute many competitors once the student graduates.
The kinds of startups a lot of us have been thinking of are essentially the last 20 years of mostly ZIRP-era investment scams, but that can't go on forever (the current dotcom-bubble-like "AI" hype wave and its low barriers to an acquihire exit notwithstanding). I'd guess more people will have to build viable businesses than our field has in a long time.
I think it belongs to the type, but since they use “auto” it looks standalone and can be confused with the “&” operator. I personally always wrote * and & as a prefix of the variable name (as in “int *p”), not as a suffix of the type name (“int* p”), except when specifying types in templates.
IMO it's a separate category of modifiers/decorators on the type, like how adjectives and nouns are distinguished, and the only reason we have the false choice in C/C++ is that the token isn't alphanumeric (if it were, e.g., "ref", it would interfere with the type or variable name in either other convention).
If I were forced at gunpoint to choose between the type and the name, "obviously" I would also choose the type.