> I think the headline is problematic because it suggests the raw photos aren't very good and thus need processing
That’s not how I read either the headline or the article at all. I read it as “this is a ‘raw photo’ fresh off your camera sensor, and this is everything your camera does behind the scenes to make that into something that we as humans recognize as a photo of something.” No judgements or implications that the raw photo is somehow wrong and something manufacturers should eliminate or “game”.
But that was the point the OP was making. Not that you couldn’t differentiate between white balance correction and generative fill, but rather that the intent of the change matters for determining if an image is “fake”.
For example, I took a picture of my dog at the dog park the other day. I didn’t notice it when framing the picture, but on review at home, right smack in the middle of the lower third of the photo, conveniently positioned so that your eyes are led there by my dog’s pose and snout direction, was a giant, old, crusty turd. Once you noticed it, it was very hard not to see it anymore. So I broke out the photo editing tools and used some auto retouching tool to remove the turd. And lucky for me, since the ground was mulch, the tool did a fantastic job of blending it out, and if I didn’t tell you it had been retouched, you wouldn’t know.
Is that a fake image? The subject of the photo was my dog. The purpose of the photo was to capture my dog doing something entertaining. When I was watching the scene with my own human eyes I didn’t see the turd, nor was capturing the turd in the photo intended or essential to capturing what I wanted to capture. But I did use some generative tool (algorithmic or AI, I couldn’t say) to convincingly replace the turd with more mulch. So does doing that make the image fake? I would argue no. If you ask me what the photo is, I say it’s a photo of my dog. The edit does not change my dog, nor change the surroundings to make the dog appear somewhere else or appear to be doing something they weren’t doing were you there to witness it yourself. I do not intend the photo to be used as a demonstration of how clean that particular dog park is or was on that day, or even to be a photo representing that dog park at all. My dog happened to be in that locale when they did something I wanted a picture of. So to me that picture is no more fake than any other picture in my library. But a pure “differentiate on the tools” analysis says it is a fake image: content that wasn’t captured by the sensor is now in the image, and content that was captured no longer is. Fake image then, right?
I think the OP has it right: the intent of your use of the tool (and its effect) matters more than what specific tool you used.
Everyone knows what is meant by a real vs fake digital photo; it is made abundantly clear by the mentions of debayering and white balance/contrast as "real" and generative fill as "fake". You and some others here are just shifting the conversation to a different kind of "fake". A whole load of semantic bickering for absolutely nothing.
Well, you changed the photo from an accurate representation of the scene into what you felt it should be. The photo is no longer "real" but a story. Stories tell us things that are true too, but not in a physical-evidence kind of way.
And this is why I think intent matters more than the tools. Let's say I had framed the photo when I took it such that the turd was not in the frame. Is that a "fake" photo because I framed it in such a way as to exclude something that was there?
And if shot composition doesn't make it fake, what if I cropped the photo after the fact? I'm removing something the camera captured to make the picture closer to what I "felt like it should be", just using the removal tool. That's functionally no different from framing the shot differently, but it is modifying the actual captured image.
If we decide that removal, whether by framing or by post-hoc cropping, is still "real", and that it's the use of a tool that adds something that wasn't there that crosses the line, would the same apply to just cutting a square out of the photo without cropping the rest of the frame? A transparent square would be an interesting artistic choice for sure, but does that then get into the realm of "fake"? What if the square is black or white? Is adding a clearly "post-process censor bar" crossing a line into making the photo "fake"?
If those are fine, and it's the "adding content that looks like it should be there" that is the problem, does that mean dust or hair removal makes a photo fake? Removing something that was on the lens also requires generating what the computer thinks is behind that hair or dust speck.
For what it's worth, I don't think there is a hard line here; like I said, intent matters. But I do think that figuring out where one's personal and general lines are, and when one might move them, is an interesting thought experiment.
I think you're right: framing of the shot absolutely tells a story. But at the same time, when you can trust the veracity of the frame content, at least you can say something about the real world, even if you have to acknowledge the possibility of something just off-screen that would change your interpretation of events. Sometimes the contents of the image betray that the real world diverges significantly from the author's intent, for example by allowing you to deduce that what is claimed is actually impossible. So there is significant value in "real" images.
I don't know, removing the turd from that picture reminds me of when Stalin had the head of the NKVD (deceased) removed from photos after the purge. It sounds like the turd was probably the focus of all your dog's attention and interest at the time, and editing it out has created a misleading situation in a way that would be outrageous if I was a dog and capable of outrage.
I didn’t read the article as implying that the final image the author arrived at was “unprocessed”. The point seemed to be that the first image was “unprocessed” but that the “unprocessed” image isn’t useful as a “photo”. You only get a proper “picture” of something after you do quite a bit of processing.
>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.
That's not how I read it; that line is an incidental comment. The unprocessed version is the raw sensor values visible in the first picture, and the processed versions are both the camera's photo and his attempt at the end.
This whole post read like an in-depth response to people who claim things like “I don’t do any processing to my photos” or feel some kind of purist shame about doing so. It’s a weird chip some amateur photographers have on their shoulders, but even pros “process” their photos and have done so all the way back to the beginning of photography.
Is it fair to recognize that there is a category difference between the processing that happens by default on every cell phone camera today, and the time- and labor-intensive processing performed by professionals in the time of film? What's happening today is like if you took your film to a developer and then the negatives came back with someone having airbrushed out the wrinkles and evened out skin tones. I think that photographers back in the day would have made a point of saying "hey, I didn't take my film to a lab where an artist goes in and changes stuff."
It’s fair to recognize. Personally I do not like the aesthetic decisions that Apple makes, so if I’m taking pictures on my phone I use camera apps that give me more control (Halide, Leica Lux). I also have reservations about cloning away power lines or using AI in-painting. But to your example, if you got your film scanned or printed, in all likelihood someone did go in and change some stuff. Color correction, touching up the contrast, etc. are routine at development labs. There is no tenable purist stance because there is no “traditional” amount of processing.
Some things are just so far outside the bounds of normal, and yet are still world-class photography. Just look at someone like Antoine d’Agata who shot an entire book using an iPhone accessory FLIR camera.
But mapping raw values to screen pixel brightness already entails an implicit transform, so arguably there is no such thing as an unprocessed photo (that you can look at).
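To make that implicit transform concrete, here's a rough sketch using ImageMagick (a build with a raw delegate such as libraw is assumed, and the filename and gamma value are placeholders): even a "neutral" rendering means choosing how to stretch the sensor's value range and which transfer curve maps linear light to display brightness.

```
# Hypothetical illustration; photo.dng and the 2.2 gamma are placeholders.
# -auto-level stretches the raw values to the full output range, and
# -gamma applies a transfer curve so linear sensor data displays at a
# familiar brightness. Both are choices, not neutral pass-throughs.
magick photo.dng -auto-level -gamma 2.2 photo.png
```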
Conversely, the output of standard transforms applied to raw Bayer sensor output might reasonably be called the "unprocessed image", since that is the intended output of the measurement device.
Would you consider all food in existence to be "processed", because ultimately all food is chopped up by your teeth or broken down by your saliva and stomach acid? If some descriptor applies to every single member of a set, why use the descriptor at all? It carries no semantic value.
My lived experience with human drivers and signal outages at intersections is that most people get it very wrong. If you're lucky and the light-controlled intersection is one lane in each direction, more often than not everything works out well. But any intersection with multiple lanes, or especially one that is a primary road crossing a lower-traffic secondary, is going to be full of people just flying through as if they were on green the whole time.
> Art eludes definition while asking questions about what it means to be human.
All art? Those CDs full of clip art from the '90s? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.
> Let's say, in an alternate universe where Rubio's department genuinely thought there were cost or coordination issues with Calibri. They could have reversed the decision and cited that.
So apparently Daring Fireball (of all places) got their hands on the full memo text[1]. And in all of that text, there are 2 sentences total that refer to DEI at all; the rest of it talks about those coordination and cost issues. So I guess they did do that, they just also had to take their shots at DEI, because why be in politics these days if you can't virtue signal even the most standard of decisions.
It feels like the question raised by this is: why was there such a blind spot here? Apple's UI/UX decisions aren't for everyone, to be sure, but they were a core differentiator for Apple. So if so many people inside saw Dye as a problem, and given the ever-increasing dissatisfaction with the UX from customers, why was there a blind spot? Was it too much focus on the hardware side? Was it a change in overall care for UX beyond just Dye? Is it a company stretched too thin between all its various projects?
> Can I just give the same permission to iTerm? Nope. We are not worthy of that power, and must re-affirm permissions every 30 days for all non-Apple software.
Not sure what permission you're referring to or what your curl script is trying to do, but `/opt/homebrew/opt/curl/bin/curl http://www.google.com` works just fine on Tahoe from both iTerm2 and ghostty. Looking through the various permission grants, the only one they both have in common is "App Management". They share some file permission grants, but whereas iTerm has Full Disk Access, ghostty only has Downloads and removable media. In the past I've found I've needed to add terminals like iTerm to the Developer Tools permission, but ghostty isn't in there currently and curl still works just fine. And in none of these cases have I ever needed to re-affirm the permission every 30 days.
Any chance you have the "disclaim ownership of children" setting enabled in iTerm? Maybe if iTerm is not allowing child processes to use its own permissions, you're having to re-authorize curl specifically (and it's getting updated about once every 30 days?)
> And if you don't accept them, they are silently denied.
This is IMO the correct behavior. If something asks for permission and it's not explicitly granted, then the default should always be denied.
> Not sure what permission you're referring to or what your curl script is trying to do, but `/opt/homebrew/opt/curl/bin/curl http://www.google.com` works just fine on Tahoe from both iTerm2 and ghostty.
Mwwahahaha. Yep. Curling something neutral like google.com worked fine for me as well. That's how I was verifying that everything was OK.
This permission is so weirdly named and scary, and the applications never tell you why they're requesting it... on iOS it would be against the developer guidelines...
So I thought that might be the dialog you're talking about, which is why I thought it was weird that ghostty didn't have it and curl seemed to work just fine. I also could swear that it did show rejected apps in the list, just with the permission turned off.
After experimenting a bit, it seems like:
1) You're right that it doesn't show the rejected apps in the list. Seems like the only way to find them is to query the TCC SQLite db (see the sketch after this list).
2) The permission applies equally to the built-in `curl` and to the homebrew-installed one.
3) What it apparently doesn't apply to is the gateway address on your network, regardless of which app you use.
4) It also doesn't apply to all "private" IP-space addresses, just ones that are on your subnet. For example, I have an IoT subnet on my network on its own VPN with a route in the gateway for accessing it from some specific devices on the primary LAN. Without the permission, I can ping and curl (with both the built-in and homebrew versions) all of the devices on the IoT subnet, but I can't ping or curl (again, with either version) any of the devices on the LAN subnet. Turn the permission on and I can hit everything on the local subnet fine from all the devices.
5) I also validated that the above rules hold even for an application (alacritty in this case) that had never been given the permission (in case setting and then removing the permission did something odd).
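For reference, here's the kind of query I mean for point 1. A minimal sketch, assuming the usual per-user db location; the schema is undocumented and shifts between macOS versions, and reading the db requires the terminal to have Full Disk Access:

```
# Inspect recorded TCC decisions. The access table's columns and the
# meaning of auth_value (0 reads as "denied", 2 as "allowed") are my
# inference from poking at the db, not documented behavior.
sqlite3 "$HOME/Library/Application Support/com.apple.TCC/TCC.db" \
  "SELECT service, client, auth_value FROM access;"
```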
> The keyword is SILENTLY. The permission requests should be logged and made available in a central location, where they can be reviewed.
This I agree on: the rejected apps should show in the privacy permissions, even if in a collapsed tab/pane, so that you can review them later. I could swear it used to do this, but maybe I'm thinking of iOS, which does do that.
> 2) The permission applies equally to the built-in `curl` and to the homebrew-installed one.
I think this might have been fixed? `codesign -dvvv /usr/bin/curl` no longer prints anything about permissions. I definitely remember investigating this particular point.
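If anyone wants to re-check on their own machine, the entitlements baked into a binary can be dumped directly; this shows what's attached to the binary itself, not what TCC has granted at runtime:

```
# Print the entitlements embedded in the system curl binary.
# Empty output means no entitlements are attached.
codesign -d --entitlements - /usr/bin/curl
```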
> 3) What it apparently doesn't apply to is the gateway address on your network, regardless of which app you use.
Doesn't work for me. I can't ping or HTTP into my gateway from a terminal app that doesn't have this permission.
Edit: apparently pinging the gateway works if you're on WiFi. But not with wired Ethernet. Wow.
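For anyone who wants to reproduce the comparison, a quick sketch that looks up the current default gateway and pings it; run it once on WiFi and once on wired Ethernet from the same unprivileged terminal:

```
# Find the default gateway with stock macOS tools and try to reach it.
gw=$(route -n get default | awk '/gateway/ {print $2}')
echo "default gateway: $gw"
ping -c 3 "$gw"
```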