A single text file per year in n++. I try to do it once a week, plus end-of-month and end-of-year analysis. Usually I end up writing more, and that's fine.
My requirements are local only and fast.
Start with the simplest tool you have available and go from there. If it becomes a habit and you have certain pain points then you can always switch. But trying to find the PerfectTool_TM before you're even journaling feels like putting the cart before the horse.
The world is a vastly easier place to live in when you're knowledgeable. Being knowledgeable opens doors that you didn't even know existed. If you're both using the same AGI tool, being knowledgeable allows you to solve problems within your domain better and faster than an amateur. You can describe your problems with more depth and take various pros and cons into consideration.
You're also assuming that AGI will help you or us. It could just as easily help only a select group of people, and I'd argue that's the most likely outcome. If it does help everybody and brings us to a new age, then the only reason to learn will be for learning's sake. Even if AI writes the perfect novel, you as a consumer still have to read it, process it, and understand it. The more you know, the more you can appreciate it.
But right now, we're not there. And even if you think it's only 5-10y away instead of 100+, it's better to learn now so you can leverage the dominant tool better than your competition.
The talk of "safety" and harm in every image or language model release is getting quite boring and repetitive. The reason it's there is obvious, and there are known workarounds, yet the majority of conversations seem to be dominated by it. There's very little discussion of the actual technology, and I'm aware of the irony of mentioning this. I really wish I could filter out these sorts of posts.
Hopefully it dies down soon, but I doubt it. At least we don't have to hear garbage about
"WHy doEs opEn ai hAve oPEn iN thE namE iF ThEY aReN'T oPEN SoURCe"
I hope the safety conversation doesn't die. The societal effects of these technologies are quite large, and we should be okay with creating the space to acknowledge and talk about the good and the bad, and what we're doing to mitigate the negative effects.
In any case, even though it's repetitive, there exists someone out there on the Interwebs who will discover that information for the first time today (or whenever the release is), and such disclosures are valuable. My favorite relevant XKCD comic: https://xkcd.com/1053/
I get that, but it just overshadows the technical stuff in nearly every post. And it's just low-hanging fruit to have a discussion over. But you're probably right with that comic; I spend so much time reading about AI stuff.
> Whether they compete with a cloud AI time will tell.
But they're not competing with cloud AI. Why would a person need to go to the cloud to give you a reminder or download an app?
They're competing against the current local assistant, Siri.
Large models are great, but they can't fit in 8 or 16 GB of RAM. And that's a very big deal.
They don't need to put all of the world's information locally, just the relevant bits. It doesn't need to know every celebrity's full history, for example.
You can have the basic stuff on-device with the "smarts" of an LLM that can have conversations with the user and have context to previous questions.
The other stuff can be fetched from the cloud (with the user's permission OFC) and optionally saved locally.
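The flow described above (answer basics on-device, escalate to the cloud only with permission, optionally cache the result) can be sketched roughly like this. All names here (`LocalAssistant`, `try_local`, `cloud_lookup`) are hypothetical stand-ins, not any real API:

```python
# Rough sketch of a hybrid local/cloud assistant, as described above.
# The class and method names are made up for illustration.

class LocalAssistant:
    def __init__(self):
        self.cache = {}  # facts fetched from the cloud, optionally saved locally

    def answer(self, query, allow_cloud):
        # 1. Check anything we've already saved locally.
        if query in self.cache:
            return self.cache[query]
        # 2. Try the small on-device model (reminders, apps, basic smarts).
        local = self.try_local(query)
        if local is not None:
            return local
        # 3. Only go to the cloud with the user's permission.
        if allow_cloud(query):
            result = self.cloud_lookup(query)
            self.cache[query] = result  # optionally persist for next time
            return result
        return "I can't answer that without going online."

    def try_local(self, query):
        # Stand-in for the on-device model's capabilities.
        basics = {"set a reminder": "Reminder set."}
        return basics.get(query)

    def cloud_lookup(self, query):
        # Stand-in for a network call to a larger cloud model.
        return f"(cloud answer for: {query})"


assistant = LocalAssistant()
print(assistant.answer("set a reminder", allow_cloud=lambda q: False))
```

The point of the structure is that the permission check sits between the local fallback and the network call, so the default path never leaves the device.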
Did you only read the title? Because the abstract gives you a pretty good idea of what they mean when they say reason. It's pretty easy to understand. No need to immediately call bullshit just because of a minor semantic disagreement.
>ThEY DON'T tHiNk. They'rE JuSt STochAStiC pARrotS. It'S not ReAL AGi.
It doesn't even matter if these claims are true or not. They miss the point of the conversation and the paper. "Reason" is a perfectly valid word to use. So is "think". If you ask it a question and then follow up with "think carefully" or "explain carefully", you'll get the same response.
inb4 AcTUALLy LlMS Can'T do aNYtHIng CaRefUlly BECaUse pRogRAms ARen'T caRefUl
Do you know something we don't? If so, don't be shy and share it with the class.
More seriously, only time will tell if today's event has any significance. Even if OpenAI somehow goes bankrupt, I doubt the history books will dwell on its decline. Instead they'll talk about its beginning: how it was the first to introduce LLMs to the world, the catalyst of a new era.
Ah yes, because all those normal people will be able to run these powerful models on the devices that they currently own. Such a naive take.
The rich will ALWAYS get their piece of the pie, and once they've had their fill, we'll be left fighting for the crumbs and thanking them for their generosity.
AI won't solve world hunger; it will make millions of people jobless. It won't stop wars; it will be used as a tool for the elite to spread propaganda. The problems that plague society today are ones that technology (which has existed for decades) could fix, but greed prevents it.
> If we were attempting to put someone into some sort of Matrix like reality simulator but we lacked the technology to provide a perfect simulation what level of simulation would be 'good enough' that a human would consider it reality and be able to develop into something we could relate to?
Have you tried VR before? You really don't need perfect simulation to be fooled. Good enough is already here, albeit for a short amount of time.
This is such a strange comment, and so removed from my lived experience, that I question whether it's even real. Where is this indoctrination happening? The vast majority of my classes were dedicated to learning the subject, and there was rarely any room for deviation, especially not deviation that turned to religion or politics. Even when discussing religious texts in my philosophy classes, it was through the lens of an academic, the same way we discussed Plato and Aristotle. There was open discussion, and if the conversation turned aggressive or wasn't going anywhere, it was quickly shut down. We all had a curriculum to follow, and certain debates can go anywhere, or can happen outside of class.
If you wanted to challenge your professors, it was about your grade, and that was usually done during office hours. Professors are human and they all have their quirks, but I didn't experience anything out of the norm while talking to them or hearing about their interactions with others.
As for challenging each other, I think we did more learning from each other than anything else and our opinions did change over time but that's not indoctrination, just part of growing up.
You say you went to university 20 years ago but did you end up going back afterwards? If so, what are the differences you've personally noticed?