Hacker News | samuelfekete's comments

This is a step towards a constant stream of hyper-personalised AI generated content optimised for max dopamine.


Kids will go to School V2 and have absolutely nothing in common to talk about because each one will have completely unique media entertainment at home.


They can just sit in the corner with their meta glasses and talk to their LLM friends.


I wonder if it will lead to a civilizational collapse because kids V2 won't have kids. Even today's young adults barely have any kids.


It's certainly something that has people worried. I suspect a population decline and unbalanced demographics, with too many old people per working-age person, are pretty much guaranteed. But personally I don't think it will be a catastrophic civilisation-level collapse. At some point things will rebalance and turn the other way as resources, housing and such become more abundant.

I'm also hopeful we sort out the problems with big tech eventually. I was initially against it, but I'm starting to think Australia's plan to ban under 16s from social media is actually a very good idea.


This is already the case with the myriad of streaming services and choices of what people will let their kids watch or not. With my little kids, we tend to mostly watch PBS Kids content with a bit of Disney shows mixed in when it comes to screen time. We try to avoid seemingly empty hyper-stimulating content like Paw Patrol and others. But in the end a lot of the other kids in school/daycare talk about these shows and others, which can lead to the kids not having that kind of shared context. For instance, my four year old loves Wild Kratts, but practically nobody in his class knows the show. Meanwhile, he doesn't have any context for the various characters of Paw Patrol.


Don't worry, they will always have new Minecraft mobs and biomes to discuss.


Interesting idea, online gaming becoming the de facto new societal community meeting space.


As a parent of pre-teens and actual teens, this isn't "becoming"; it is already the de facto standard. Minecraft, Roblox and Fortnite are our kids' third place: https://en.wikipedia.org/wiki/Third_place


Until games also become uniquely generative in realtime


For multiplayer games, I'm not sure that would be a detriment to the experience in any way.

Procedural generation is a known quantity in gaming, with well-explored pros and cons.


But if the marketing fueling the industry is to be believed, every parent will be able to build a tailor-made game for their child. I know that won't really be how it'll turn out, but it's a funny exercise to think about.


The Torment Nexus is a Skinner box


Just imagine when uncensored models this good can generate porn.


It's far from sustainable (for now)


Assuming you have to generate new content for each viewer-second watched, then yes, it won't pencil out. But if you have a library of tons of content you can keep building out...



Not really. AI porn will never take off because people want to see a real person.



That isn't the equal-transit-time explanation, and the information on nasa.gov doesn't look wrong to me. It's entirely consistent with the Babinsky "How wings really work" video and experiment shown in a sibling post.


NASA.gov has multiple pages on lift. Maybe one will explain it accurately.

https://www1.grc.nasa.gov/beginners-guide-to-aeronautics/lif...


They could just live in the sphere.


Does it save the content of open tabs?

This is my number one frustration with existing iOS browsers - when I come back to tabs without an internet connection everything is gone.


All iOS browsers just use some webview under the hood. I doubt they have any control at all over this extremely irritating behavior.


But why can't they store the page cache as local data and continue to use the webview to open it?


It is likely the OS that is clearing those tabs from memory; the app may not get any notifications. iOS does not have virtual memory that can be saved to storage and restored on request. iPadOS now does have that on the newer M1 versions, which also have 8GB of RAM, so they are less likely to need to purge or swap.


It's still not clear why the app can't save the page itself to storage on every page load, let the OS clear the memory without notifying anyone, and then, if the tab has been purged, reload it from the local file.


That sounds like the app would need to implement its own RAM paging, which is a complicated and error-prone endeavor and likely not worth the effort for the developer. I don't even know if an iOS app would have the system hooks to be able to do it.


Why not just save the page as HTML? Isn't that what official Safari does with the Reading List?


Pages can rewrite themselves at any time.


So? Having something good most of the time beats having nothing all of the time.


Because it's an opaque component, and it's the only thing you can use on iOS. Firefox and Chrome are just chrome/shells around Safari's engine, and all suffer from this problem.


It should remember the scroll position, and the content of <input> and <textarea> tags.


Does it re-request contents when it reload tabs after the app is suspended? Safari makes it very difficult to read content later if you don’t have service.


It does. Not sure WebKit has an easy solution for that specific scenario at the moment, besides saving a web archive to disk.
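For anyone curious what the web-archive route could look like: here is a minimal sketch, not how Safari or any shipping browser actually does it, of a third-party iOS browser persisting a tab via WKWebView's web-archive API (available since iOS 14). The TabArchiver type and the archiveURL location are hypothetical names used only for illustration.

    import WebKit

    // Sketch only: snapshot a tab to disk so it can be shown again
    // after the OS purges the web view, even with no connectivity.
    final class TabArchiver {
        let archiveURL: URL  // hypothetical per-tab file location

        init(archiveURL: URL) {
            self.archiveURL = archiveURL
        }

        // Save the currently loaded page as a web archive,
        // e.g. after each page load or when the app is backgrounded.
        func save(_ webView: WKWebView) {
            let destination = archiveURL
            webView.createWebArchiveData { result in
                if case .success(let data) = result {
                    try? data.write(to: destination, options: .atomic)
                }
            }
        }

        // Load the saved archive instead of re-requesting the page,
        // e.g. when the tab was purged and there is no network.
        func restore(into webView: WKWebView, originalURL: URL) {
            guard let data = try? Data(contentsOf: archiveURL) else { return }
            webView.load(data,
                         mimeType: "application/x-webarchive",
                         characterEncodingName: "utf-8",
                         baseURL: originalURL)
        }
    }

Of course this only captures the page as it was at snapshot time; as noted above, pages can rewrite themselves at any time, so the archive can be stale, and scroll position and form contents would still need to be saved separately.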


There’s a bias towards coding capabilities amongst testers (and perhaps trainers) of ChatGPT.

I bet it has (or can have) a similar level of capability when it comes to chemistry, biology, finance, law, etc. - all human knowledge that is expressible as text or formulae.

With regards to software engineering, a great amount of time is currently being wasted by coders trying to solve problems that have already been solved by others. Some of the solutions for that previously included libraries, SaaS, and Stack Overflow.

Now we have another tool at our disposal to 10x all software engineers (and perhaps the same for other industries).


I think the question is what happens when you're able to 10x all software engineers but don't have job market demand to match the 10x increase. 90% layoffs? Maybe not 90% because these processes aren't perfectly efficient or evenly distributed, and also because you can increase productivity without affecting demand to some extent, but what's the %? 30-50%? That's still huge.


Perhaps we'll have Jevons paradox, and instead of reducing the workforce, we'll accelerate software eating the world.

A lot of software is crappy. The extra productivity could go partially into quality improvement.


We haven’t automated 10% of what could be automated, especially in tasks that were previously not cost-efficient to automate or that required AI tech to automate.


My understanding is that much of the difficulty with automation actually has to do with interactions with the physical world. We still don't have a machine for folding laundry that matches human performance.

Coding is nice for AI to overtake because it's all abstract (apart from context). The problem domain is literally just string to string mapping.


Coding may be abstract, but execution of the resulting program is not, and the results of that execution are driven by real-world needs. The truth is that a human can invent things because they can pattern-match across whole domains. You could say there is a mechanical solution to that, that we could build an algorithm with the same result, but AI cannot do so unless the algorithm was already created. I think the current state of AI is great for searching and for creating a starting point, but it can never get us to the finish line.


Finally a software engineering silver bullet post-No Silver Bullet.


Prompt: Elon Musk bought Twitter

ChatGPT: As a large language model trained by OpenAI, I'm sorry but the information you provided is not accurate. As of November 30, 2022, Twitter is still a publicly-traded company and is not owned by Elon Musk. Jack Dorsey is the current CEO of Twitter, and the company has not been taken private. Elon Musk is not currently the owner of Twitter, and the company has not been taken private. I'm a large language model trained by OpenAI, and I don't have the ability to access current or real-time information. The information I have access to is based on the knowledge and data that I was trained on up to November 30, 2022. If you have specific information or sources that you believe contradict the information I have provided, I encourage you to share that information so that we can discuss it further. I'm here to provide helpful and accurate information, and I'm always open to learning more.


I saw in another example that it said it's only trained up to the end of 2021 though - someone correct me if I'm wrong?

This could just be a case of "as of X date, this is simply my best available information to go on".


After interrogating ChatGPT about this, it told me that the explicit cut-off date in its pre-prompt is to prevent it from saying things about developing stories, where the facts are not fully known yet.


Since this is a study about the predictability of humans, perhaps “less random” should be defined by how similar or predictable a sequence is based on the dataset of human inputs.


> Sit so that the monitor is between the two of you.

No. Use two monitors that are mirrored or with screen sharing, so that each person can sit comfortably centred on their own screen.


I've found shared tmux/screen sessions are ideal.

+ each person can have their own font/resolution

+ it doesn't dominate the entire screen, allowing each person to keep their own notes/etc on the side

+ at any point someone can "jump in" and take control of the session

+ interacting in a confined shared space radically reduces "over communication" issues. i.e. if you want to show something it's got to be demonstrable in a small textual window

+ you have a shared command-line, which is more useful than it might initially seem

+ seamlessly scales to in-person and remote pair programming

There are some downsides:

- it requires that both users be familiar with a terminal-based editor

- it may present security issues for folks operating in locked-down/low-resource environments (e.g. can't spin up a temporary machine with a shared account)

- sharing graphical information requires a separate communication layer


Even considering how easy things are with tmate, it is really challenging because a lot of software engineers don't really know how to use a terminal, to the point that asking them to SSH into a machine is a bit too much.

So we end up sharing the screen over Hangouts, which is basically very inefficient and wastes a lot of CPU. But because it is normalized, it is "the standard".

EDIT: my comment was a bit unfair. I guess I could install VS Code, change my daily editor, and use it with the Live Share plugin with those using VS Code. So Hangouts it is.


Take that one step further: each person should have their own resolution. Just because your 20-year-old eyes looking at a 4K super-widescreen monitor can read it doesn't mean my 30-year-old eyes with glasses on a laptop can read it too. (I really like Code With Me on PyCharm.)


Thanks for posting.

For the benefit of people trying it out, here are some important features that are in the pipeline but are not yet available:

- Exact phrase searching

- Sort and filter by date

- A very large index


I’m working on a new search engine that will allow you to do that. (It’s still a work-in-progress, but you can try it out here: https://entfer.com/).

