AWS won't raise the limits on our new account (we're stuck at 1GB RAM in Lightsail after 2 months, even though we need to launch this month).
Looking at Hetzner or Vultr as alternatives. A few folks mentioned to me that Infomaniak has great service and uptime, but I haven't heard much about them otherwise.
Anyone used Infomaniak in production? How do they compare to Hetzner/Vultr?
Just curious, what are you building/launching that requires more than 1GB of RAM at launch? 1GB is a lot of memory for most use cases; guessing it's something involving graphics or maybe simulations? In those cases, dedicated instances with proper hardware will give you enormous performance benefits, FYI.
Both Vultr and Hetzner are solid options, I'd go for Hetzner if I know the users are around Europe or close to it, and I want to run tiny CDN-like nodes myself across the globe. Also, Hetzner if you don't wanna worry about bandwidth costs. Otherwise go for Vultr, they have a lot more locations.
appreciate the advice! Launching a 2D game generator with an editor, and expecting those users to share the games. Not multiplayer yet.
The Lightsail instance sometimes just hangs and we have to reboot it when people perform simple actions like logging in or querying the API (we have a simple Express / Next.js app).
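If the hangs turn out to be memory pressure (plausible on 1 GB running Express + Next.js together), it's worth checking before rebooting. A minimal sketch, Linux-only, reading `/proc/meminfo` with no third-party packages; the `mem_stats` helper name is my own:

```python
def mem_stats():
    """Return (total_kb, available_kb, used_percent) from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    total = info["MemTotal"]
    # MemAvailable is the kernel's estimate of memory usable without swapping
    avail = info.get("MemAvailable", info["MemFree"])
    return total, avail, 100 * (total - avail) / total

total_kb, avail_kb, used_pct = mem_stats()
print(f"{used_pct:.0f}% of {total_kb // 1024} MB used")
```

If this sits near 100% when the hangs happen, swap thrashing or the OOM killer is a likely culprit, independent of which provider you move to.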
I haven't checked recently, but previously a Lightsail account was a full AWS account, so you could tie in Route 53, an app or API gateway, and some instances.
That said, for your use case, you might want the predictability and guarantee of having no "noisy neighbors" on an instance. While most VM providers don't offer that (you have to go to fully dedicated machine), AWS does, so keep that in mind as well.
For BYOL (bring your own hosting labor), Vultr is a lesser known but great choice.
Shameless plug: for those who are interested, I've built https://aidailycheck.com that tracks sentiment around any of the major LLMs. I've created a Claude Code extension where you get the sentiment directly in the CLI.
The average rating (status-mood) is unclear imo. Right now it shows "Struggling (40%)" for ChatGPT and "Mid (15%)" for Claude. This suggests that ChatGPT is doing better than Claude (based on the percentage number), when actually the opposite is the case. And Gemini seems to be doing best right now, but has the same low 15% rating as Claude. I would expect the percentage to be the actual average of ratings, not how many people voted for "Mid".
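To make the distinction concrete with made-up numbers (the bucket names match the site; the vote counts and per-bucket scores are entirely my own assumption):

```python
# Hypothetical vote counts for one model (number of voters per bucket).
votes = {"Great": 60, "Mid": 20, "Struggling": 20}

# Hypothetical score per bucket; the real site would have to pick its own scale.
scores = {"Great": 1.0, "Mid": 0.5, "Struggling": 0.0}

total = sum(votes.values())

# What the site appears to show: the share of voters in one bucket.
mid_share = votes["Mid"] / total

# What a reader would expect from "average rating": a weighted mean of buckets.
average = sum(scores[k] * n for k, n in votes.items()) / total

print(f"share voting Mid: {mid_share:.0%}")
print(f"average rating:   {average:.0%}")
```

Here the model would display "Mid (20%)" even though its weighted average is far higher, which is exactly the confusion described above.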
hell, even local LLMs are better, and I'm talking original Mixtral-quality here, not the capable models that were released a few days ago.
My workflow now for "Google-grade" queries is to query an LLM and then use search to verify and look up additional information. DDG-grade queries still get handled by DuckDuckGo.
Google looks like it's circling the drain from where I'm standing.
Thank you for this. I just installed it and it was exactly what I was looking for when I bought my Oculus: the chance to explore, zoom, and learn about the night sky freely in 3D. I tried many others, but this has been the only one to meet my expectations.
I now use ChatGPT to translate English to Traditional Chinese and give me some variations. It definitely works better than Google Translate.
For example for "I want to build a good gym routine":
我想建立一個良好的健身計畫。
Wǒ xiǎng jiànlì yīgè liánghǎo de jiànshēn jìhuà.
Here are some variations of the sentence along with their English translations:
我想制定一套有效的健身計劃。
Wǒ xiǎng zhìdìng yī tào yǒuxiào de jiànshēn jìhuà.
I want to create an effective workout plan.
我希望安排一個健康的健身日程。
Wǒ xīwàng ānpái yīgè jiànkāng de jiànshēn rìchéng.
I hope to set up a healthy fitness schedule.
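For what it's worth, this kind of request is easy to script. A minimal sketch against the OpenAI chat completions API; the model name, prompt wording, and both helper names are my own illustrations, not something from the comment above:

```python
def build_translation_prompt(text, target="Traditional Chinese", variations=2):
    """Assemble a prompt asking for a translation plus a few variations,
    each with pinyin and an English back-translation."""
    return (
        f'Translate into {target}: "{text}"\n'
        f"Then give {variations} variations. For each, include pinyin "
        f"and an English back-translation."
    )

def translate(text, model="gpt-4o-mini"):
    """Send the prompt to the OpenAI API (reads OPENAI_API_KEY from the env)."""
    from openai import OpenAI  # imported lazily; the prompt helper works without it
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_translation_prompt(text)}],
    )
    return resp.choices[0].message.content
```

Asking for the back-translations in the same request is what makes this nicer than Google Translate for learning: you see why each variation differs, not just that it does.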
PS: I am building an app to make learning Chinese easier. Feel free to ping me privately for testing :)
I also do this with translations between Swedish, Danish and English.
I find it fascinating that ChatGPT is better at translating (a lot better) than a tool specifically built for it, when (if I understand correctly) ChatGPT was in no way designed to translate, and is only in some way predicting text, one word at a time.
How come Google hasn't leveraged better existing tech to make the translations better? Is it too computationally expensive?
The Google Translate offline translation datasets are absolutely tiny - like, 20 MB for French. Obviously this is heavily quantised and so on, but it’s not that surprising that a 30-60+ billion parameter language model outcompetes Google Translate handily.
I assume you’re right - that Google translate hasn’t been updated to take advantage of much bigger, more computationally complex models. I suppose for direct translation it’s probably not needed, but being able to ask chatgpt to explain the translation (and any cultural nuances involved) is a game changer when you’re trying to learn a language.
But they could easily be using a larger dataset online?
Which cuts two ways: maybe this line of reasoning doesn't mean anything; or, well, yes exactly, but then why so small online, when they have all that space and also offer Bard?
They don't charge anywhere near enough to do that, I'd imagine; and likely couldn't at the scale they operate at (I mean, they are even embedded into many apps to help instantly translate banal things like comments). Imagine trying to translate a long news article with a sequence of max-length LLM inferences.
Yeah, I'm not actually suggesting it run through Bard/an LLM. I just mean that a small dataset is surely a design requirement for offline translation on space-constrained devices; it doesn't necessarily mean they use the same datasets online, and if they do.. why? Because it seems to be enough?
(It's a bit confusing to talk about, because surely it is just an older version of the same sort of thing, i.e. a less large language model, right? I just think it could/should/would be a bit larger in the online hosted version.)
You could check whether translations are also better on Bard; that would partly answer the question.
I assume the low latency you get from Google Translate is not feasible with current LLMs like ChatGPT. Translate is used to translate sentences on the go, live video, and entire web pages; all of these would be too expensive (and slow) with an LLM... but that might change in the coming months/years as the tech improves.
I live in Taiwan and try to run some beach volleyball games. It's not common here, and not easy to find players. Writing this article[1] helped me recruit new players constantly (it ranks 1st/2nd on Google).
Sometimes I write about things I want people to know about Taiwan, like its bike-sharing system[2], semiconductors[3], or simply good food in Taiwan[4].
Sometimes, I write about tech stuff, like Kubernetes CPU limits[5] or blockchain consensus[6].
I thought about focusing on a single topic, but when people reach out to me, like today[7] about my food post, it reminds me that it's fine, and it makes me quite happy that I helped one soul out there.