
BCBS CA revenue is approximately $25B. The total above is $25.6M. That's about 0.1%.

You may view those salaries as appropriate for the leaders of companies this size, or as immoral and outrageous. But either way, executive comp is not the big problem with US healthcare costs.


$14,570 is our healthcare cost per capita.

For one thing, even that tiny 0.1% works out to about $15 a year I'd save if I weren't paying my insurance company's CEO. I would absolutely love to keep that $15. The idea that more than one dollar every single month from every single person is going to the CEOs of all our healthcare services is actually INSANE when you think about it.

0.1% is actually a LOW amount for some entities in the system. For example, the Cleveland Clinic spends 0.4% of revenue on executive compensation: https://projects.propublica.org/nonprofits/organizations/340...

That really means that out of my $14,570 yearly healthcare cost I could be paying something like $5/month just on executive salary. Who knows, maybe it's even more!
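Quick back-of-the-envelope in Python (rough numbers, and assuming exec comp flows straight into per-capita cost):

  per_capita = 14_570             # yearly US healthcare spend per person
  print(per_capita * 0.001)       # 0.1% -> ~$14.57/yr, just over $1/mo
  print(per_capita * 0.004 / 12)  # 0.4% -> ~$4.86/mo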

This is, again, insane. Why do Cleveland Clinic executives need to be paid $30 million/year?

This isn't administrative cost, meaning all the hard-working people who do the clerical work that keeps these systems operating. This is just the salaries of an extremely small group of people, fewer than 10 per company.

All of these entities are allowed to make excess profit and/or stretch the definition of non-profit status, and to pay CEOs dozens to hundreds of times the salary of their lowest-paid employees. There isn't really a limit to how much they can compensate top executives.


> Why do Cleveland Clinic executives need to be paid $30 million/year?

So they can hire bodyguards?


And? Do they do 30x more work than other companies' C-suites? Do they have multiple PhDs or some impossible-to-find skill that makes them better than other people doing the same job? If such small fractions of money are so inconsequential, why are they nickel-and-diming all their customers and the healthcare system?

It isn't even a secret that these positions are largely based on networking and inter-company politics, not on skill or productivity or often even any real merit. Maybe a couple bucks isn't much to you, but it is to other people. And we haven't even gotten into all the non-cash extras that are often a huge bonus on top of the actual salary. How many more doctors could $25 million provide? How many lives saved?


Work jobs in the order they were submitted within a partition key. This selects the next partition key that isn't locked. You could make it smarter by selecting only the partition keys where all of the rows are still unlocked.

  -- Claim the next unlocked partition, then lock its jobs in
  -- submission order (PostgreSQL syntax).
  SELECT *
  FROM jobs
  WHERE partition_key = (
    SELECT partition_key
    FROM jobs
    ORDER BY partition_key
    LIMIT 1
    FOR UPDATE SKIP LOCKED  -- SKIP LOCKED needs a locking clause
  )
  ORDER BY submitted_at
  FOR UPDATE SKIP LOCKED;


Yes, something along those lines could work. But I'm not sure the above query would work if rows are being appended to the table in parallel.

Also, if events for one partition get processed quickly, would the last partition get an equal chance?


I think all your points are valid, but I've also had good results using workspaces for environments. Here's generally how I structure my Terraform, primarily targeting AWS.

- 1 Terraform workspace per environment (dev, test, prod, etc.).

- Managing changes to workspaces / environments is done with whatever approach you use for everything else (releasing master using some kind of CICD pipeline, or release branches). The Terraform is preferably in the same git repository as your code, but it can be separate.

- The CICD tool injects an environment variable into the build to select the appropriate workspace and somehow supplies credentials granting access to a role that can be assumed in the appropriate account.

- A region module / folder that defines the resources you want in each region. This is your "main" module that specifies everything you want.

- Minimal top-level Terraform that instantiates multiple AWS providers (one for each region) and uses them to create region modules (sketched below). Any cross-region or global resources are also defined here.

- The region module uses submodules to create the actual resources (RDS, VPCs, etc.) as needed.

This approach assumes you want to deploy to all your regions in one go. That may not be the case.
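Roughly, the top level looks like this (a sketch; the module path, region list, and variable names are illustrative, not prescriptive):

  # Two providers, one per region; the region module does the real work.
  provider "aws" {
    alias  = "use1"
    region = "us-east-1"
  }

  provider "aws" {
    alias  = "euw1"
    region = "eu-west-1"
  }

  module "region_use1" {
    source    = "./modules/region"
    providers = { aws = aws.use1 }
    env       = terraform.workspace # dev, test, prod, ...
  }

  module "region_euw1" {
    source    = "./modules/region"
    providers = { aws = aws.euw1 }
    env       = terraform.workspace
  }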


Maybe there aren't too many itches left to scratch?

Between DynamoDB, Cassandra, and Scylla, it seems like that problem set is mostly solved? I know those products continue to move forward, but they all work really well at this point and solve the fundamental problem to a good degree.


I've found actors (Akka specifically) to be a great model when you have concurrent access to fine-grained shared state. It provides such a simple mental model of how to serialize that access. I'm not a fan as a general programming model, or even as a general-purpose concurrent programming model.
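The mental model is roughly this (a toy sketch in plain Python rather than Akka/Scala; the names are made up):

  # One thread owns the state; all access is serialized via a mailbox.
  import queue
  import threading

  class CounterActor:
      def __init__(self):
          self.mailbox = queue.Queue()
          self.count = 0  # shared state, touched only by the actor thread
          threading.Thread(target=self._run, daemon=True).start()

      def _run(self):
          while True:
              msg, reply = self.mailbox.get()
              if msg == "incr":
                  self.count += 1
              elif msg == "get":
                  reply.put(self.count)

      def tell(self, msg):  # fire-and-forget
          self.mailbox.put((msg, None))

      def ask(self, msg):  # request/response
          reply = queue.Queue(maxsize=1)
          self.mailbox.put((msg, reply))
          return reply.get()

  actor = CounterActor()
  actor.tell("incr")
  print(actor.ask("get"))  # 1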


Vert.x has the "Verticle" abstraction, which more or less corresponds to an Actor. It's close enough that I don't feel like I'm missing much by using it instead of Akka.


What are your criticisms of actors as a general purpose concurrent programming model?


This has been the common/best practice for so long I don't understand why TFA is proposing something different.


Cache control directives indicate how long a browser, proxy, etc. can store a resource for… they don't guarantee it will be stored that long, though

Control over a Service Worker's cache lifetime is more explicit

I’d still specify ‘good’ cache lifetimes though
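For a fingerprinted asset that never changes, a "good" lifetime looks something like:

  Cache-Control: public, max-age=31536000, immutable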


Makes sense as a theoretical problem. Have you ever seen data suggesting it's a practical problem? Seems like one could identify "should have been cached forever but wasn't" using ETag data in logs.


Facebook did a study about ten years back where they placed an image in the browser cache and then checked how long it remained available… for something like 50% of users it had been evicted within 12 hours

If one of the most popular sites on the web couldn't keep a resource in cache for long, then most other sites have no hope. And that's before we consider that more people are on mobile these days and so have smaller browser caches than on desktop


From the discussion above it seems that browsers have changed their behaviour in the last 10 years based on that study.

See: https://news.ycombinator.com/item?id=42166914


Browsers have finite cache sizes… once they're full, the only way to make space to cache more is to evict something, even if that entry is marked as immutable or cacheable for ten years


More than highlight: they'll also do schema validation against inline SQL strings.


With due respect, I think you've misunderstood the single-table design pattern.

Because you've introduced static hash keys ("user", "email", etc.) you've had to partition manually, which DDB should do for you automatically. And while you covered the partition size limit, you're also likely to have write performance issues because all writes funnel into the static "user" and "email" hash keys instead of being distributed.

Single-table design should distribute writes and minimize roundtrips to the database. user#12345 as a hash key and range keys of 'User', 'Email#jo@email.com', 'Email#joe@email.com', etc achieve those goals. If you need to query and/or sort on a large number of attributes it's going to be easier, faster, and probably cheaper to stream data into Elasticsearch or similar to support those queries.
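As a concrete sketch (boto3, with an assumed table named "app" using generic pk/sk attributes; all the names here are illustrative):

  import boto3
  from boto3.dynamodb.conditions import Key

  table = boto3.resource("dynamodb").Table("app")

  # Writes spread across partitions because the hash key is per-user.
  table.put_item(Item={"pk": "user#12345", "sk": "User", "name": "Joe"})
  table.put_item(Item={"pk": "user#12345", "sk": "Email#joe@email.com"})

  # One roundtrip fetches the user record and all of their emails.
  items = table.query(
      KeyConditionExpression=Key("pk").eq("user#12345")
  )["Items"]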


Please look at your individual situation and don't take this suggestion blindly. If you've already contributed significantly to your deductible or out-of-pocket maximum it can definitely make sense to continue with COBRA.

Also, you can game the COBRA enrollment window. You have 60 days from your loss of coverage to elect COBRA, and once you elect you have another 45 days to submit payment. You can elect on the 59th/60th day and then pay 45 days later if you ended up needing the coverage. If you don't need the coverage, don't pay.
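Since COBRA coverage is retroactive to the date you lost coverage, that's roughly 105 days of free optionality. A quick sketch of the timeline (dates illustrative):

  from datetime import date, timedelta

  lost_coverage  = date(2025, 1, 1)
  elect_deadline = lost_coverage + timedelta(days=60)   # elect by day 60
  pay_deadline   = elect_deadline + timedelta(days=45)  # pay within 45 more
  print((pay_deadline - lost_coverage).days)            # 105 days to decide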


> Please look at your individual situation and don't take this suggestion blindly.

Exactly. The message here is that it's incredibly important to re-evaluate your healthcare plan. Every household is going to be different! Bust out Excel and crunch the numbers.


It does. But if you're concerned about this (and many of the other items mentioned), you can control access to those features using IAM.

https://docs.aws.amazon.com/service-authorization/latest/ref...

The condition keys specifically are listed here; you can see keys that control access to storage class, tagging, etc.

https://docs.aws.amazon.com/service-authorization/latest/ref...
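For example, a statement along these lines (sketched as a Python dict; the bucket name is illustrative, though s3:x-amz-storage-class is the real condition key) would deny anyone explicitly writing objects to the Glacier storage classes:

  import json

  deny_glacier_puts = {
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
          "StringEquals": {
              "s3:x-amz-storage-class": ["GLACIER", "DEEP_ARCHIVE"]
          }
      },
  }
  print(json.dumps(deny_glacier_puts, indent=2))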

