I want to learn to launch and grow a paid app. I've built a few SaaS and installed apps to the point of being "80% ready to launch," but never went all the way. I don't quite know what's holding me back, but I realized this is the most important thing for me to learn when it comes to my software side endeavor.
AI accelerates my learning. It helps me understand more, run more experiments, and has lured me into reading more code. I think AI also makes me more productive, but I'm less focused and exhausted sooner. I also fear its addictive potential, which is why I force myself to take breaks more often and try not to use it every day.
AI has fundamentally changed the programming experience for me in a positive way, but I'm glad it's not my full-time job. I think it can also have bad effects that cannot easily be avoided in full-time roles under market conditions.
Had the same thought. ChatGPT often tells me things like "This is the hard truth" or "I'm telling it to you as it is (no fluff)" or whatever, just because my initial prompt contains a line about not making things up and telling me how things are instead of what I'd like to hear. I added a line specifically telling it not to use these phrases, but it appears to be surprisingly hard to get rid of them.
Yes, fun aspect: I actually think it was written by a human, but in a way as if they're asking a machine rather than other human beings. I feel guilty of doing the same from time to time, and I feel it's a bad direction.
I'm not beating up on OP but I chuckled when I read the question. Literally the only place I see the phrase "no fluff" with any frequency is with Deepseek lol.
Nothing wrong with the phrase itself of course, other than the fact that it's like literally in every other reply for me lol.
Some people act like the use of an LLM immediately invalidates or lowers the value of a piece of content. But in the case of a question or a simple post, especially by somebody for whom English is a second language, using an LLM to rephrase or clean up some text seems like an innocent and practical use case for LLMs.
I apologize if it sounded like a critique (it did), but I wanted to make an honest observation first and foremost. I think it was written by a human, but it sounds like a prompt. I believe I have changed my use of language, too, but I don't like the direction for human-to-human communication.
Labor "power" is paid, mostly by hours put in, so it's hard to compare with AI. Simpler and fairer (for a start): tax capital gains as soon as assets are used as collateral for loans.
I had one, and they let me walk to other diagnostics the next day. I had about six months of severe headaches afterwards, which were only bearable when lying down flat. Glad it finally went away. If I remember correctly, you should stay in bed for 48 hours after the procedure.
Yes, the possibility of severe and prolonged headaches is part of my consent discussion for this procedure. That said, I usually only perform the procedure to help exclude (or confirm) a medical condition with a risk of permanent disability or death, so it can be a tough decision at times.
That OpenAI is now apparently striving to become the next big app-layer company could hint at George Hotz being right, but only if the bets work out. I'm glad there is competition at the frontier-labs tier.
I would love to learn more about their challenges, as I have been working on an Excel AI add-in for quite some time and have followed Ask Rosie almost from their start.
That they have now gone through the whole cycle worries me that I'm too slow as a solo builder working on the side in these fast-paced times.