Yeah, imagine having to maintain a Python dependency (which is subject to security constraints) all because some junior can't read/write bash... and then that junior telling you you're the problem lmao
You have no issues with handing over your cell phone to a police officer who pulls you over? I imagine you'll say "all I have to do is present an ID," but what if the officer can't read it and wants to hold it? Okay, you won't let him hold it, so he bends down and gets real close to your phone? You know he must verify the address against his database, so are you going to make him write down your address as he walks back to his car? Yeah, some people cannot afford this type of friction when dealing with police officers.
Places that support Apple's digital ID / Wallet state ID do so by using an identity reader that the user taps their phone against after selecting which info they want to convey. It is not meant for the owner to physically show the phone to the requesting party.
I'm sure this will happen in some cases, especially in the interim where digital ID is technically not accepted but the person doesn't have their physical ID. An example would be a traffic stop in a state that currently supports digital state ID, since right now digital ID is basically only accepted at TSA checkpoints. But the cop looking at your phone doesn't add any more authenticity than you verbalizing the info and them writing it down, which is what they usually do if someone has a photo of their missing ID.
Also, Apple cleverly designed it so that if the phone is locked and you activate Wallet and select your ID, the biometric scan doesn't unlock the entire phone; getting into the rest of the phone requires another biometric scan or the phone passcode.
From the article "Users do not need to unlock, show, or hand over their device to present their ID."
That assumes every LEO, bodega, grocery store, etc. goes out and buys the terminals to read these things though. Assuming LEOs you encounter will never just say "sorry, my reader is broken, go ahead and unlock your phone so I can bring it back to my car and type in your info to look up your license" feels naive, especially for folks with darker skin tones.
I agree it's possible to present your ID without unlocking your phone, but is it likely?
It is likely that the reader will be implemented as a phone / tablet app, so it will be pretty ubiquitous eventually.
FWIW, it will never technically be legal to rely on visually looking at someone's phone to verify age / ID, because it would be incredibly easy to fake the display and physical interaction. The only reason it can work as an ID is that it is digitally verified by a reader.
But I do agree that, especially in the interim, there will be cases where an LEO will coerce a phone handover. I don't think this will be a long-term problem, though, since physical interaction with the phone does nothing to verify authenticity. You may as well hand them a piece of paper you printed out with your info.
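The "visual inspection proves nothing" point is the crux: what a reader actually checks is a cryptographic signature over the ID data, not pixels on a screen. A toy sketch of the idea (real mobile IDs use asymmetric signatures per ISO/IEC 18013-5; this uses HMAC purely for illustration, and every name here is made up):

```python
import hmac
import hashlib

# Hypothetical issuing-authority key. In a real system this would be an
# asymmetric key pair, with only the public half distributed to readers.
ISSUER_KEY = b"dmv-secret-key"

def issue_id(holder_data: bytes) -> tuple[bytes, bytes]:
    """The issuer signs the ID data once, at issuance time."""
    tag = hmac.new(ISSUER_KEY, holder_data, hashlib.sha256).digest()
    return holder_data, tag

def reader_verify(holder_data: bytes, tag: bytes) -> bool:
    """A reader recomputes the signature; a faked display can't forge this."""
    expected = hmac.new(ISSUER_KEY, holder_data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

data, tag = issue_id(b"name=Jane Doe;dob=1990-01-01")
assert reader_verify(data, tag)                                  # genuine data verifies
assert not reader_verify(b"name=Jane Doe;dob=2008-01-01", tag)   # tampered data fails
```

The point is that verification happens over the transmitted bytes, so rendering a convincing-looking screen buys a forger nothing.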
> It is likely that the reader will be implemented into a phone / tablet app so will be pretty ubiquitous eventually.
They may wind up ubiquitous, but reader usage will be determined by the officers in the field, on a case by case basis. Take a look at jurisdictions where body cameras are mandated but are turned off, or intentionally obstructed.
I think you're forgetting about flights. And country borders. And I'm not sure how much luck you're going to have opening a bank account... does your birth certificate work for that?
As for the birth certificate, it is a document that proves your citizenship and does identify you, so it can be used. It's common in other countries as a document for opening bank accounts, so in many ways it can work in the USA too.
Unless you're trying to be unnecessarily pedantic, yes you do.
There are exceptions like if you're a minor, if you're a passenger on a general aviation flight instead of a commercial one, etc.
But if you're an adult and you want to hop on a commercial flight from JFK to LAX, you need government-issued photo ID, period. You're not getting through security otherwise.
"I forgot my identification; can I still proceed through security screening?
In the event you arrive at the airport without acceptable identification (whether lost, stolen, or otherwise), you may still be allowed to fly. By providing additional information, TSA has other ways to confirm your identity so you can reach your flight."
Every US airline has this documentation, and TSA has this documentation on their website too.
>Unless you're trying to be unnecessarily pedantic, yes you do.
Edge cases should be accounted for IRL. People like you are the reason why the right to privacy is being eroded away constantly.
>But if you're an adult and you want to hop on a commercial flight from JFK to LAX, you need government-issued photo ID, period. You're not getting through security otherwise.
The evidence I presented above states you can. Whether or not it's seamless or comfortable isn't a discussion, nor should it be "pedantic" to know the rules presented by these organizations.
If you forget your ID, then the extra screening will attempt to find existence of the ID in databases. The ID still needs to have been issued in the first place. This is mainly if you lose your driver's license while traveling, it lets you get back home. It is in no way some kind of general-purpose mechanism for flying without ID. And it's a gigantic hassle that may take hours at the airport to sort out, leading you to miss your flight and wait to be rebooked.
You're absolutely being pedantic and argumentative, and I can't even begin to imagine why. I assume you know perfectly well that an ID is required to fly as the general rule. I can't imagine what you think you're trying to accomplish by arguing otherwise.
And if you really want to be pedantic, note that the word used is "may", not "shall":
> In the event you arrive at the airport without acceptable identification (whether lost, stolen, or otherwise), you may still be allowed to fly.
Microsoft's Ubuntu image seems to be ready. I guess I could see a reason to use regular Ubuntu 24 and then install dotnet manually, but these images have served us well.
docker pull mcr.microsoft.com/dotnet/sdk:10.0 - Refers to Ubuntu 24.04 "Noble Numbat"
docker pull mcr.microsoft.com/dotnet/sdk:10.0-noble - Refers to Ubuntu 24.04 "Noble Numbat"
What's the use case for this? Texting your classmates right next to you when the wifi is down and you didn't pay your phone bill? It requires peers to be running bitchat. So is this only going to be useful if everyone has bitchat installed?
I thought this entire premise was obvious? Does it really take an article and a Venn diagram to say you should only provide the relevant content to your LLM when asking a question?
"Relevant content to your LLM when asking a question" is last year's RAG.
If you look at how sophisticated current LLM systems work, there is so much more to this.
Just one example: Microsoft open sourced VS Code Copilot Chat today (MIT license). Their prompts are dynamically assembled with tool instructions for various tools based on whether or not they are enabled: https://github.com/microsoft/vscode-copilot-chat/blob/v0.29....
You have access to the following information to help you make
informed suggestions:
- recently_viewed_code_snippets: These are code snippets that
the developer has recently looked at, which might provide
context or examples relevant to the current task. They are
listed from oldest to newest, with line numbers in the form
#| to help you understand the edit diff history. It's
possible these are entirely irrelevant to the developer's
change.
- current_file_content: The content of the file the developer
is currently working on, providing the broader context of the
code. Line numbers in the form #| are included to help you
understand the edit diff history.
- edit_diff_history: A record of changes made to the code,
helping you understand the evolution of the code and the
developer's intentions. These changes are listed from oldest
to latest. It's possible a lot of old edit diff history is
entirely irrelevant to the developer's change.
- area_around_code_to_edit: The context showing the code
surrounding the section to be edited.
- cursor position marked as ${CURSOR_TAG}: Indicates where
the developer's cursor is currently located, which can be
crucial for understanding what part of the code they are
focusing on.
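The dynamic-assembly part is the interesting bit: the system prompt is not one static string, it's built per request from whichever tools are enabled. A rough sketch of that pattern (the registry and function names here are mine, not Copilot's):

```python
# Hypothetical tool registry; the real Copilot Chat source is far richer.
TOOL_INSTRUCTIONS = {
    "recently_viewed_code_snippets": "Code snippets the developer recently looked at.",
    "current_file_content": "The content of the file the developer is working on.",
    "edit_diff_history": "A record of changes made to the code, oldest to latest.",
}

def assemble_prompt(enabled_tools: list[str], question: str) -> str:
    """Build the system prompt from only the tools enabled for this request."""
    sections = [
        f"- {name}: {TOOL_INSTRUCTIONS[name]}"
        for name in enabled_tools
        if name in TOOL_INSTRUCTIONS
    ]
    header = "You have access to the following information:\n" + "\n".join(sections)
    return header + "\n\nUser question: " + question

# Only the enabled tool's instructions end up in the prompt.
prompt = assemble_prompt(["edit_diff_history"], "Why did this test break?")
```

Tokens spent describing a disabled tool are wasted, so conditional assembly like this is what keeps the context budget for actual content.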
I get what you're saying, but the parent is correct -- most of this stuff is pretty obvious if you spend even an hour thinking about the problem.
For example, while the specifics of the prompts you're highlighting are unique to Copilot, I've basically implemented the same ideas on a project I've been working on, because it was clear from the limitations of these models that sooner rather than later it was going to be necessary to pick and choose amongst tools.
LLM "engineering" is mostly at the same level of technical sophistication that web work was back when we were using CGI with Perl -- "hey guys, what if we make the webserver embed the app server in a subprocess?" "Genius!"
I don't mean that in a negative way, necessarily. It's just...seeing these "LLM thought leaders" talk about this stuff in thinkspeak is a bit like getting a Zed Shaw blogpost from 2007, but fluffed up like SICP.
> most of this stuff is pretty obvious if you spend even an hour thinking about the problem
I don't think that's true.
Even if it is true, there's a big difference between "thinking about the problem" and spending months (or even years) iteratively testing out different potential prompting patterns and figuring out which are most effective for a given application.
I was hoping "prompt engineering" would mean that.
OK, well...maybe I should spend my days writing long blogposts about the next ten things that I know I have to implement, then, and I'll be an AI thought-leader too. Certainly more lucrative than actually doing the work.
Because that's literally what's happening -- I find myself implementing (or having implemented) these trendy ideas. I don't think I'm doing anything special. It certainly isn't taking years, and I'm doing it without reading all of these long posts (mostly because it's kind of obvious).
Again, it very much reminds me of the early days of the web, except there's a lot more people who are just hype-beasting every little development. Linus is over there quietly resolving SMP deadlocks, and some influencer just wrote 10,000 words on how databases are faster if you use indexes.
That doesn't strike me as sophisticated, it strikes me as obvious to anyone with a little proficiency in computational thinking and a few days of experience with tool-using LLMs.
The goal is to design a probability distribution that solves your task by taking a complicated probability distribution and conditioning it, and the more detail you put into thinking about it ("how to condition for this?" / "when to condition for that?"), the better the output you'll see.
(what seems to be meant by "context" is a sequence of these conditioning steps :) )
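That framing can be made concrete with a toy joint distribution: each piece of context is evidence you condition on, and conditioning renormalizes probability mass onto fewer, more relevant outputs. A minimal sketch (the numbers are obviously made up):

```python
# Toy joint distribution P(answer, context_feature); values sum to 1.
joint = {
    ("use indexes", "slow query"): 0.30,
    ("use indexes", "fast query"): 0.05,
    ("add caching", "slow query"): 0.10,
    ("add caching", "fast query"): 0.05,
    ("do nothing",  "slow query"): 0.05,
    ("do nothing",  "fast query"): 0.45,
}

def condition(joint: dict, evidence: str) -> dict:
    """P(answer | evidence): keep matching outcomes and renormalize."""
    matching = {ans: p for (ans, ctx), p in joint.items() if ctx == evidence}
    total = sum(matching.values())
    return {ans: p / total for ans, p in matching.items()}

# Without context, "do nothing" is the single most likely answer.
# Conditioning on the "slow query" context shifts the mass decisively.
posterior = condition(joint, "slow query")
```

Each extra piece of relevant context is another conditioning step that sharpens the posterior, which is roughly what "context engineering" is gesturing at.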
Okay fair point, thank you for that info. This makes more sense for the people who are developing prompt generation for some tool (like copilot, claude code, etc) yet I was thinking of it more from a user standpoint (like asking chatgpt, gemini, etc).
The industry has attracted grifters with lots of "<word of the day> engineering" and fancy diagrams for, frankly, pretty obvious ideas.
I mean yes, duh, relevant context matters. This is why so much effort was put into things like RAG, vector DBs, prompt synthesis, etc. over the years. LLMs still have pretty abysmal context windows so being efficient matters.
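That efficiency concern in practice boils down to ranking candidate snippets and packing the best ones until the window runs out. A minimal sketch, using word count as a crude stand-in for a real tokenizer (the scoring and names are hypothetical):

```python
def pack_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily pack the highest-scoring snippets into a token budget.
    Scores would come from a retriever (vector similarity, BM25, etc.)."""
    chosen: list[str] = []
    used = 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = len(text.split())  # crude token estimate
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen

snippets = [
    (0.9, "def connect(): open the database connection"),
    (0.2, "license header and boilerplate text"),
    (0.7, "retry logic wraps connect with backoff"),
]
# With a tight budget, only the high-relevance snippets make the cut.
context = pack_context(snippets, budget=12)
```

Vector DBs and RAG pipelines are mostly elaborate ways of producing good scores for exactly this packing step.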
I think that experiment was very cool, but I will say that the OAuth2.0/OIDC protocol is very well documented and there are tons of tools already built around it in multiple languages.
I implemented the OAuth2.0 protocol in 3 different languages without a 3rd-party library - the entire spec implemented by hand. This was around 2015, when many of the libraries that exist today didn't exist yet. I did this as a junior developer for multiple enterprise applications. At the end of the day it's not really that impressive.
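For context on what "implementing the spec by hand" involves: the authorization-code grant is mostly careful URL and form construction plus state validation. A sketch of step one, building the authorization request per RFC 6749 §4.1.1 (the endpoint, client ID, and redirect URI are hypothetical):

```python
import secrets
from urllib.parse import urlencode, urlsplit, parse_qs

def build_authorization_url(authorize_endpoint: str, client_id: str,
                            redirect_uri: str, scope: str) -> tuple[str, str]:
    """Build the URL the user is redirected to; the random `state` value
    must be checked again on the redirect back to prevent CSRF."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}", state

url, state = build_authorization_url(
    "https://auth.example.com/authorize",   # hypothetical endpoint
    client_id="my-app",
    redirect_uri="https://app.example.com/cb",
    scope="openid profile",
)
qs = parse_qs(urlsplit(url).query)
assert qs["state"] == [state] and qs["response_type"] == ["code"]
```

The rest of the spec (code-for-token exchange, token validation, error handling) is the same flavor of work, which is why it's doable by hand but tedious.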
Three weeks ago I did basically the same thing as the author of the Cloudflare story, but I did it with my own open source tool. I went into the experiment treating Claude Code as a junior engineer and guiding it on a feature I wanted implemented.
In a single Saturday the LLM delivered the feature to my spec, passing my initial test cases, adding more tests, etc…
I went to bed that night feeling viscerally in my bones I was pairing with and guiding a senior engineer not a junior. The feature was delivered in one day and would have taken me a week to do myself.
I think stories like the Cloudflare story are happening all over right now. Staff level engineers are testing hypotheses and being surprised at the results.
OAuth 2.0 doesn't really matter. If you can guide the model and clearly express requirements, boundaries, and context, then it's likely to be very useful and valuable in its current form.
This is a great example of how no example provided is ever good enough. There’s always an argument that it doesn’t really count. Yet you just said the computer is doing what you did as a junior developer.