
It is opt-in by default for non-commercial licenses:

> For companies: Admins can enable data sharing at a company-wide level. To support early adopters, we’re offering a limited number of free All Products Pack subscriptions to organizations willing to participate while we explore this program. For companies that are not willing to opt in to the program, nothing changes, and as always, admins are in control.

> For individuals on non-commercial licenses: Data sharing is enabled by default, but you can turn it off anytime in the settings.

> For individuals using commercial licenses, free trials, free community licenses, or EAP builds: Nothing changes. You can still opt in via the settings if you are willing to share data with JetBrains (and your admins, if any, allow it).

And the detail of what is collected:

> We’re now adding the option to allow the collection of detailed code‑related data pertaining to IDE activity, such as edit history, terminal usage, and your interactions with AI features. This may include code snippets, prompt text, and AI responses.


"Opt-in by default" is a contradiction. Do I have to actively do something to turn it on, in which case it's opt-in; or is it on by default, in which case it's not opt-in?


> opt-in by default

That's called opt-out.

> Data sharing is enabled by default, but you can turn it off anytime

I wonder if this is legal in the EU?


I thought I'd share it with the HN crowd. I generally keep up with this kind of thing, but I hadn't seen anyone else mention that you can enable it on the stable channel.


> Statement

> The flaw affects RHEL9 as the regression was introduced after the OpenSSH version shipped with RHEL8 was published.


However, we see the -D option on the listening parent:

  $ ps ax | grep sshd | head -1
     1306 ?        Ss     0:01 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
As mentioned elsewhere here, is -D sufficient to avoid exploitation, or is -e necessary as well?

  $ man sshd | sed -n '/ -[De]/,/^$/p'
     -D      When this option is specified, sshd will not
             detach and does not become a daemon.  This
             allows easy monitoring of sshd.

     -e      Write debug logs to standard error instead
             of the system log.
RHEL9 is also 64-bit only, and we see from the notice:

"we have started to work on an amd64 exploit, which is much harder because of the stronger ASLR."

On top of writing the exploit to target 32-bit environments, this also requires a DSA key that implements multiple calls to free().

There is a section on "Rocky Linux 9" near the end of the linked advisory where unsuccessful exploit attempts are discussed.


>As mentioned elsewhere here, is -D sufficient to avoid exploitation, or is -e necessary as well?

https://github.com/openssh/openssh-portable/blob/V_9_8_P1/ss...

sshd.c handles no_daemon (-D) and log_stderr (-e) independently. log_stderr is what is passed to log_init() in log.c, which gates the calls to the syslog functions. There is a special case that sets log_stderr to true if debug_flag (-d) is set, but nothing for no_daemon.
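
For anyone who can test it, a rough way to check (the port below is just an example) is to run a second sshd in the foreground and see where the log lines end up:

  # with -D alone, log output should still go to syslog/the journal
  $ sudo /usr/sbin/sshd -D -p 2222
  # with -D and -e, log output should appear on the terminal (stderr)
  $ sudo /usr/sbin/sshd -D -e -p 2222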

I can't test it right now though so I may be missing something.


I'm on Oracle Linux, and they appear to have already issued a patch for this problem:

  openssh-8.7p1-38.0.2.el9.x86_64.rpm
  openssh-server-8.7p1-38.0.2.el9.x86_64.rpm
  openssh-clients-8.7p1-38.0.2.el9.x86_64.rpm
The changelog addresses the CVE directly. It does not appear that adding the -e option is necessary with this patch.

  $ rpm -q --changelog openssh-server | head -3
  * Wed Jun 26 2024 Alex Burmashev <alexander.burmashev@oracle.com> - 8.7p1-38.0.2
  - Restore dropped earlier ifdef condition for safe _exit(1) call in sshsigdie() [Orabug: 36783468]
    Resolves CVE-2024-6387


So in other words, -De is not a workaround. -Dde might be, but it will produce more log output than is wanted.


-De is a workaround. -D is not.


Speaking of Rocky 9, they suggest getting the new version from the SIG/Security repository:

https://rockylinux.org/news/2024-07-01-rocky-linux-9-cve-202...


They never responded to my original ticket, so I didn't bother logging a second one.

I've done domain transfers to other registrars without issues before, so I expected this one to be the same.


Yeah that's why I didn't expect anything to change - if I missed a checkbox somewhere I'd consider it to be a dark pattern, honestly.


I'm the author of this post. In case you're wondering why I didn't just transfer the domain to Cloudflare: they do not support the South African TLD, unfortunately.


You can find a suitable registrar from here: https://tld-list.com/tld/co.za

Also, Route53 supports it: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/co...


Thanks for the list! Personal recommendations would always help, but the reference list is useful.

It's good to know Route53 is an option - I might just make the switch tomorrow after getting some sleep.


I use route53 for my site (brooker.co.za), and have had no complaints. Much nicer than dealing with uniforum's weird email templates.

Disclaimer: I work at AWS.


Thanks for the +1 for Route53, and thanks for mentioning your blog - the content looks really nifty! I'll have a read in the morning.


You're welcome. Porkbun would have been my recommendation, but sadly it doesn't support it!


I’ve been happy with https://www.inwx.com/en for many years. No frills.


If you’re still looking for an alternative, https://dnsimple.com is an excellent, engineer-focused, no-bullshit domain registrar, and they support .co.za


Thanks! I'll compare with Route53 mentioned above and decide the way forward. I was quite happy with Gandi's similar no bullshit approach for the past few years until the abrupt mailbox change last year.


As someone with limited experience with containers, how does K8s allow you to move away from things like Puppet for configuration management? Does it offer some substitute that alleviates the need for something like Puppet or Ansible?


Yes, because whatever you used for app-specific configuration, like libraries and packages, is now done in the Dockerfile and containerized. So the same thing run locally is run in the cloud. Then, as for the infrastructure for running code (load balancers, service discovery, Docker, etc.), that is all given to you just by running K8s. So you are more concerned with shipping immutable containers to k8s than with provisioning "machines".
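
As a rough sketch (the base image, app layout, and registry name are just assumptions), the image you build and run locally is exactly what gets shipped to the cluster:

  $ cat Dockerfile
  FROM python:3.12-slim
  COPY app/ /app
  RUN pip install -r /app/requirements.txt
  CMD ["python", "/app/main.py"]
  # the same image runs on a laptop and in the cluster
  $ docker build -t registry.example.com/myapp:1.0 .
  $ docker run --rm -p 8080:8080 registry.example.com/myapp:1.0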

Then you can focus on containers, which can be run, tested, and built wherever, without the fear of broken updates or one thing stepping on another. We found back in the days of Ansible and Chef that we had very low confidence in upgrading hosts live, so we would instead do immutable hosts and blue-green deploy them to production. But why think in the scope of hosts and VMs when really you just have some application that needs to run somewhere?

K8s IMO isn't the be-all and end-all; I think eventually we will get to something that doesn't need containers at all and you just run processes. But it is a good step for now. Also, once you have your stuff containerized, it makes other non-k8s things easy, like AWS Lambda.

Edit: Also, yes, you can use those to set up generic k8s nodes, but when we ran bare metal we used kubeadm to make immutable CoreOS nodes. I don't think that is used anymore (I haven't checked), but really the best way to set up k8s is to deploy really thin hosts that have nothing but Docker and k8s on them. VMware and others have solutions for this too, where you don't have to mess with building hosts.


Thanks for the detailed response. It looks like I've still got a lot to learn - I've just lately been playing with LXC to get more familiarised with containers. I've previously looked at Helm apps and they seemed to be very similar to Puppet manifests. From what you said it seems like the approach is to have immutable containers for each application, set up via Dockerfiles, which somehow also simplifies the upgrade process? Does that mean you just deploy a new version/container of an application linked to the same underlying database (for example) when you need to run an upgrade?

So if you had a fleet of ten containers running the same application in a load balanced config, I'm guessing you'd need to upgrade all of them at once (with downtime) rather than upgrading them one by one (because then the database would be inconsistent)? I'm assuming that since the containers are immutable the data is stored elsewhere.


Helm is so bad it raises my blood pressure just hearing the name. Helm tries to apply the old way of doing things (like, as you said, Puppet) and makes it worse than ever. K8s YAML config is simple and elegant; don't try hiding it under templates. Kustomize is the proper way of working with K8s YAML. Helm fights against it.


Helm isn't great (god, Go templates, shudder), but I love being able to bundle all the manifests for an application together with well-documented parameters/values... If you want to do something like conditionally switch from a LoadBalancer service to Ingress in a test environment... I have no idea how you'd handle that in Kustomize, but it's straightforward in Helm. Ultimately it seems like you end up with the same complexity, just expressed via 100 layers of Kustomize transforms instead of with a conditional in your template.
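
For example (the chart layout and value names here are hypothetical, though the pattern is common), a single values flag can flip that behaviour per environment, with the templates wrapping the Service/Ingress manifests in an {{ if }}:

  # production: expose the app via a LoadBalancer service
  $ helm install myapp ./chart --set service.type=LoadBalancer
  # test: plain ClusterIP service behind an Ingress instead
  $ helm install myapp-test ./chart --set service.type=ClusterIP --set ingress.enabled=true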


There are two different types of patches in Kustomize. It does take a bit to get used to, as it's different. jid (interactive jq) makes the process a lot easier. It's actually pretty easy to make changes like this.
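
For reference, a rough sketch of that workflow (directory names are only an example), using the kustomize support built into kubectl:

  # base/ holds the plain manifests; overlays/test/kustomization.yaml
  # references them and applies strategic-merge or JSON6902 patches
  $ kubectl kustomize overlays/test
  $ kubectl kustomize overlays/test | kubectl apply -f -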


> So if you had a fleet of ten containers running the same application in a load balanced config, I'm guessing you'd need to upgrade all of them at once (with downtime) rather than upgrading them one by one (because then the database would be inconsistent)?

That depends entirely on your application and the upgrade itself. Assuming we are discussing 10 different containers (e.g. 10 micro-services), k8s will normally update them in parallel but not atomically; it would be up to application or deployment-time logic to ensure they are updated as a single 'transaction'. If they are 10 copies of the same container, then k8s itself has tools for rolling upgrades where you can control the rollout.

Also, depending on application logic, the upgrade could be done in such a way that there is no need to synchronize the services, they could work with the DB as is.
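
For the "10 copies of the same container" case, a rough sketch of controlling the rollout (deployment, container, and image names are made up):

  # roll the deployment to a new image and watch the rollout progress
  $ kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.1
  $ kubectl rollout status deployment/myapp
  # roll back if the new version misbehaves
  $ kubectl rollout undo deployment/myapp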


In Kubernetes, deploying 10 containers takes a minute or two. I haven't worked with incredibly large deployments, but really deploying any number of containers could easily take a minute or two if you have enough nodes. There is no downtime. Database inconsistency can cause problems, but problems like that can be mitigated by doing a two-phase change to the database, and such changes are pretty rare; devs also instinctively avoid making those kinds of schema changes.


You get load balancing and rolling deploys for free in K8s. What you do is upgrade the deployment with a new Docker image, and yes, the old pods are immutably replaced. In the case of an HTTP service, k8s will wait until a pod (container) responds as healthy before it is put in the loop. Then it steps down old pods according to your rolling-deploy settings and replaces them. You can define what that looks like, such as a maximum number of extra pods and a minimum number of pods kept up.
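
A rough illustration of those knobs (the values are only an example); a readiness probe on the pod is what k8s uses to decide a new pod is healthy:

  # allow at most one extra pod during the roll, and never drop below
  # the desired replica count
  $ kubectl patch deployment myapp --type merge -p \
      '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'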


> So the same thing run locally is run in the cloud

Who is preparing the Dockerfiles? Developers and system administrators / security people do not generally prioritize the same things. We do not use k8s for now (so I know very little about it), so this might not be relevant, but how do you prevent shipping insecure containers?


Generally developers. When running in a container, most of the attack surface is the app itself, and if it is compromised, the damage is supposed to be limited to the container. There have been container escape exploits in the past, though. But you treat the container as the thing that you run and give resources to, and you don't trust it, just as if you were running a bare application. All of the principles of giving an application resources, such as least privilege, apply to containers too.

But since you are not running multiple things or users in one space in a container, something such as an out of date vulnerable library can't be leveraged to gain root access to an entire host running other sensitive things too.

In Kubernetes, and Docker in general, one container should not be able to compromise another, or k8s itself. But there are other issues if an attacker can access a running container, such as now having network access to other services like databases. Then again, these are all things that can and should be locked down, even when you're provisioning hosts to run things directly.
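
A small example of what that lockdown can look like (shown with plain Docker flags; k8s securityContext and resource limits offer the equivalents, and the image name is made up):

  # non-root user, no capabilities, read-only filesystem, capped resources,
  # so a compromised app has less to work with
  $ docker run --rm \
      --user 1000:1000 \
      --cap-drop ALL \
      --read-only \
      --memory 256m --cpus 0.5 \
      registry.example.com/myapp:1.0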


You still need something to provision the base OS and all the stuff under K8s (docker daemon, ntp, storage, networking, etc.) that it relies on, unless you go with a fully hosted solution.

Ansible or Puppet still excel at that kind of work.


And it looks like the parent went with a hosted solution which explains everything. Having to manage all the underlying services that k8s glues together is a huge PITA.


It enables 12-factor apps, and if you are using a cloud provider there is no infra setup, so you do not need Puppet/Ansible/etc. It's a better way to deploy apps, hands down.


Thank you for posting. I see it briefly mentions the practice for the remembrance (although I don't know if it's specific to this initiative or whether it is more widespread):

> On November 28, 2020, light a candle in your window, as a memory of every bright soul tortured during the Holodomor of 1932-1933 – the genocide of the Ukrainian nation.

> We want everyone to be remembered – starved, unborn, numbered, and UNCOUNTED


Yes, today is the day. Candle lighting is not specific to this initiative, or this day; it's a common theme for all remembrance days to set a candle in the window for those who didn't make it through history. There is a day in May dedicated to victims of political repression, and two others dedicated to the Crimean Tatars and the Holocaust; all of those are marked with candles.


Thanks for mentioning QMK [0]; it's not something I've heard of before, and it seems like a huge step up from standard keyboard firmware. As far as I can tell only a small number of manufacturers officially support it, but at least one person seems to have made some progress getting a standard Cooler Master keyboard to run it [1].

[0] https://qmk.fm/

[1] https://github.com/qmk/qmk_firmware/tree/master/keyboards/bp...



That's quite helpful, thank you. The Arduino option also seems to be a useful alternative.


It’s reasonably simple to port QMK (at least partially) to a new microcontroller.

I have worked on Teensy 3.6 support for QMK, and plan to work on Teensy 4.x support as well.


I'm sorry to barge in, but I am in dire need of what you are working on. Could you share your progress or material, sir?


Is any special hardware needed, or is it all software work?


It's more like full-hardware-no-software work.

Basically if you want to use this on your "standard" keyboards bought off the shelf, you have to either:

1. Open your keyboard, replace the keyboard controller with something that you can program via soldering etc.

or 2. Pray your keyboard doesn't use a mask ROM to store its firmware and that it has a powerful enough MCU as its keyboard controller, then figure out how to reflash it and port QMK to it blindly.


Drop (née Massdrop) makes a number of fairly 'normal' QMK keyboards. From smallest to largest, the Alt, Ctrl, and Shift.

Then there's Ergodox EZ, a company which makes the eponymous ortholinear split keyboard (my daily driver), as well as a 40% keyboard called the Planck, and a new next-gen ergo keyboard called the Moonlander. These are designed from the get-go to use layers, where each layer assigns its own meaning to the keys, and that's where QMK gets really powerful.

There's a whole ecosystem of kits and group buys if you're looking to make a hobby out of it, but if you just want a keyboard you can program, these have you covered.
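
If it helps, the QMK CLI makes the "keyboard you can program" part fairly painless (the keyboard/keymap names below are only examples):

  # copy the default keymap, edit keymap.c, then build and flash
  $ qmk new-keymap -kb planck/rev6 -km mykeymap
  $ qmk compile -kb planck/rev6 -km mykeymap
  $ qmk flash -kb planck/rev6 -km mykeymap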


For people based in Europe who might be interested in an Ergodox or similar, I can recommend a visit to falba.tech.

I've ordered two Redox keyboards from them so far (for work & home) and am very satisfied with the product.


As someone who uses Vim daily, I quite like this idea. I would customise it to mimic the original Vim bindings a bit more closely where possible though, to e.g. keep Ctrl-D as page down.

I'm not sure if this would be possible, but it would be fun to have customisable per-application bindings (based on the active window, probably) in addition to the set of global bindings. I can also think of minor cases where motions would be useful, like 3x alt-tab (i.e. 3gt) to switch to a specific application.

I wonder at what point the depth of the mode-ness would become too much, if you were running this, for example, alongside a vi-mode bash readline.


> e.g. keep Ctrl-D as page down.

I tried doing the same thing using xdotool, by binding Ctrl+F<something> to PageDown, but it didn't work for some reason and I didn't investigate enough.


I believe a possible reason is that the OP used xkb, which, if I'm not mistaken, is lower level than xdotool (so xdotool would not suffice).


I am the OP. What I tried was something along these lines.

e.g. to get w in normal mode to send Ctrl+right, I did the following.

1. Bind w to F14 in normal mode.

2. In i3 config, add `bindsym 0xffcb exec xdotool key ctrl+Right`

If I simply ran the command (`xdotool key ctrl+Right`) in normal mode, it would work but it didn't work via a binding.

This was in fact the initial idea behind all this, but after quite some testing xdotool turned out to be unreliable, so I repurposed it as a shortcut layer.

