Hacker News | evandrofisico's comments

I know that HN is heavily populated by people from the USA, and that the author is Dutch, but a "non-English language" would be... every other language besides English.

Commenting on the actual text: his solution for the cedilla is awkward and is one of the first things I disable on any computer, because it is an extremely common letter in Portuguese.


I agree that typing curly braces and square brackets on non-English keyboards sucks, but typing non-English languages on an English keyboard sucks too. I made the opposite of your decision: I bought a laptop with my national keyboard and I switch to the US layout when programming. That has had a curious effect on me: as my editor and my terminals have black backgrounds and everything else has a white one, something in my brain makes my fingers reach for keys according to the color of the background. I make many mistakes when I attempt to program in a white window (e.g. typing in gedit) or to write in my language in a black one (e.g. an md file in the terminal).


In particular, for anybody wondering, the non-English languages they refer to (with regard to the layout they talk about) are

> English (of course), Danish, Dutch, Finnish, French, German, Italian, Norwegian, Portuguese, Spanish and Swedish

so basically all of them use some variation of the Latin alphabet.


WPA2-Enterprise and WPA3 both have certificate-chain checking exactly to avoid such attacks.


Hmm. Are you sure that your stack wouldn't accept these discovery packets until after you've successfully authenticated (which is what those chains are for)?

Take eduroam, which is presumably the world's largest federated WiFi network. A random 20-year-old studying Geology at a uni in Sydney, Australia will have eduroam configured on their devices, because, duh, that's how WiFi works. But that also works in Cambridge, England, or Paris, France, or New York, USA, or basically anywhere their peers would be, because common sense: why not have a single network?

But this means their device actively tries to connect to anything named "eduroam". Yes, it is expecting to eventually connect to Sydney to authenticate, but meanwhile, how sure are you that it ignores everything it gets from the network, even these low-level discovery packets?


I may be missing something, but it is almost guaranteed that you would not receive an RA in this scenario. eduroam uses WPA2/WPA3-Enterprise, so my understanding is that until you authenticate to the network you do not have L2 network access.

Additionally, eduroam uses certificate auth baked into the provisioning profile to ensure you are authenticating against your organization's IdP. (There are some interesting caveats to this statement, discussed in https://datatracker.ietf.org/doc/html/rfc7593#section-7.1.1 ; the mitigation is the use of private CAs for cert signing.)
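On Linux, that check is visible in a wpa_supplicant network block: a properly provisioned eduroam profile pins the CA certificate and the expected RADIUS server name, so the supplicant aborts the EAP exchange before handing credentials to a rogue AP. A sketch (the paths, identity, and server names below are invented):

```
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=PEAP
    phase2="auth=MSCHAPV2"
    identity="student@uni.example.edu"
    password="hunter2"
    # Without the next two lines, any AP broadcasting "eduroam" could
    # present its own RADIUS server and harvest credentials.
    ca_cert="/etc/ssl/certs/uni-ca.pem"
    altsubject_match="DNS:radius.uni.example.edu"
}
```

The question upthread still stands, though: this protects the authentication exchange itself, not necessarily every pre-auth frame the stack is willing to parse.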


People keep reinventing LaTeX, but poorly. Most of the issues described have already been solved by it at least 20 years ago, especially the semantics part. The tooling is mature, well understood and supported on all operating systems.


As far as custom shortforms for fully tagged angle-bracket markup are concerned, people are reinventing SGML, which has been able to handle markdown and other custom syntaxes since 1986.


I've been meaning to see how close I can come to Markdown syntax using SGML's SHORTREF and perhaps architectural forms.


Markdown inline syntax is straightforward to capture using SGML SHORTREF. What's more difficult (or outright impossible) are things such as reference links, where a markdown processor is supposed to pull text (the title of a link) from wherever it's defined, before or after its usage.
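For readers unfamiliar with SHORTREF: it lets a DTD map short delimiter strings to entity references per element context. A rough, untested sketch of the classic *emphasis* mapping (entity and map names are made up) would look something like:

```
<!ENTITY e-start "<em>">
<!ENTITY e-end   "</em>">
<!-- Inside <p>, "*" opens <em>; inside <em>, "*" closes it. -->
<!SHORTREF pmap "*" e-start>
<!SHORTREF emap "*" e-end>
<!USEMAP pmap p>
<!USEMAP emap em>
```

With such a map, `<p>some *emphasized* text</p>` parses as if it were `<p>some <em>emphasized</em> text</p>`.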

Haven't heard about archforms in a while ;) but they're not a technique for custom syntax, and since markdown is specified as a wiki syntax with a canonical mapping to HTML, there's no need for the kind of simplistic element and token renaming possible with archforms.


I've used archforms in my custom markup before: https://r6.ca/HtmlAsSgml.html

For example, I added an <nbsp> attribute to turn all spaces into non-breaking spaces, and used archforms to remove the attribute afterwards.

But yeah, maybe for Markdown you don't need archforms. On the other hand, perhaps there is some super clever way to use archforms to get your reference links working.


Most AI products are not for the end user; they just signal to shareholders and prospective investors that the company is on the hype train.


There is also a mechanism inside Google that rewards teams that launch new products more than teams that actually maintain existing ones.


This kind of cynicism is wild to me. Of course most AI products (and products in general) are for end users. Especially for a company like Google: they need to do everything they can to win the AI wars, and that means winning adoption for their AI models.


http://killedbygoogle.com/ - most Google products are for the temporary career advancement of some exec or product lead.

Their only real product is advertising; everything else is a pretense to capture users' attention and behaviors that they can auction off.


This is different. AI is an existential threat to Google. I've almost stopped using Google entirely since ChatGPT came out. Why search for a list of webpages that might have the answer to your question, and then manually read them one at a time, when you can instead just ask an AI to tell you the answer?

If Google doesn't adapt, they could easily be dead in a decade.


That's funny. I stopped using ChatGPT completely and use Gemini to search, because it actually integrates with Google nicely, as opposed to ChatGPT, which for some reason messes up sometimes (likely due to being blocked by websites, while no one dares block Google's crawler lest they be wiped off the face of the internet). For coding, it's Claude (and maybe now Gemini for that as well). I see no need to use any other LLMs these days. Sometimes I test out the open-source ones like DeepSeek or Kimi, but just as a curiosity.


If the web pages don't contain the answer, the AI likely won't either, but it will confidently tell me "the answer" anyway. I've had such atrocious issues with wrong or straight-up invented information that I must verify every single claim it makes against a website.

My primary workflow is asking the AI questions vaguely, to see whether it successfully explains information I already know or starts to guess. My average chat is around 3 messages long, since I create new chats with a rephrased version of the question to avoid context poisoning. Asking three separate instances the same question in slightly different ways regularly gives me two different answers.

This is still faster than my old approach of finding a dry ground-truth source like a standards document, book, reference, or datasheet, and chewing through it for everything. Now I can sift through 50 secondary sources for the same information much faster, because the AI gives me hunches and keywords to google. But I will not take a single claim from an AI seriously without a link to something that says the same thing.


Given how embracing AI is an imperative in tech companies, "a link to something" is likely to be a product of LLM-assisted writing itself. The entire concept of checking via the internet becomes more and more recursive with every passing moment.


Google is still looking for investors?


Of course. Alphabet exists to give returns to its shareholders.


Anthropic is more secretive about their costs; Ed Zitron is currently investigating them, specifically their spending on GCP.


Sure he is


And self-sustaining nuclear fusion is perpetually 20 years away. On what evidence can he assert a timeline for AGI when we can barely define intelligence?


And a program that can write, sound and paint like a human was 20 years away perpetually as well, until it wasn't.


Another way to put it is that it writes, sounds and paints as the Internet's most average user.

If you train it on a bunch of paintings whose quality ranges from a toddler's to Picasso's, it's not going to make one that's better than Picasso's; it's going to output something comparable to the most average painting it was trained on. If you then adjust your training data to include only the world's best paintings since we began to paint, the outcome will improve, but it'll just be another better-than-human-average painting. If you then leave it running 24/7, it'll churn out a bunch of better-than-human-average paintings, but there's still an easily identifiable ceiling it won't go above.

An oracle that always returns the most average answer certainly has its use cases, but it's fundamentally opposed to the idea of superintelligence.


> Another way to put it is that it writes, sounds and paints as the Internet's most average user.

Yes, I agree, what it produces isn't exactly high-quality stuff, unless the person using it is already an expert and could produce high-quality stuff without it too.

But there is no denying that those things were regarded as "far-to-near future, maybe" for a long time, until some people put the right pieces together.


This is the key insight, I believe. It is inherently unpredictable. There are species that pass the mirror test with far fewer equivalent parameters than large models are already using. Carmack has said something to the effect that about 10k SLOC would glue the right existing architectures together in the right way to make AGI, but that it might take decades to stumble on that way, or someone might find it this afternoon.


> Carmack has said something to the effect that about 10k SLOC would glue the right existing architectures together in the right way to make AGI

What does he know about that?


Well, he heads a company devoted to creating AGI, so admitting that success in research is inherently unpredictable is surprisingly honest. As to whether his estimate, that we have the pieces and just need to assemble them correctly, is itself correct, I can only say it is as likely to be correct as any other researcher's in the field. Which is to say, it's random.


Is this true? I think it’s equally easy to claim that these phenomena are attributable to aesthetic adaptability in humans, rather than the ability of a machine to act like a human. The machine still doesn’t possess intentionality.

This isn’t a bad thing, and I think LLMs are very impressive. But I do think we’d hesitate to call their behavior human-like if we weren’t predisposed to anthropomorphism.


> like a human

Humans have since adapted to identify content differences and assign lower economic value to content created by programs, i.e. the humans being "impersonated" and "fooled" are themselves evolving in response to imitation.


I'd argue we've had more progress towards fusion than AGI.


> I'd argue we've had more progress towards fusion than AGI.

Way more progress toward fusion than AGI. Uncontrolled runaway fusion reactions were perfected in the 50s (IIRC) with thermonuclear bombs. Controllable fusion reactions have been common for many years; a controllable, self-sustaining, and profitable fusion reaction is all that is left. The goalposts that mark when AGI has been reached haven't even been defined yet.


Yet at the same time "towards" does not equate to "nearing". Relative terms for relative statements. Until there's a light at the end of the tunnel, we don't know how far we've got.


Fusion used to be perpetually 30 years away. We’re making progress!


Stop repeating that. First, it isn't true that intelligence is barely defined: https://arxiv.org/abs/0706.3639

Second, a definition is obviously not a prerequisite, as evidenced by natural selection.


> Stop repeating that. First, it isn't true that intelligence is barely defined: https://arxiv.org/abs/0706.3639

I don't think he should stop, because I think he's right. We lack a definition of intelligence that doesn't do a lot of hand waving.

You linked to a paper with 18 collective definitions, 35 psychologists' definitions, and 18 AI researchers' definitions of intelligence. And the conclusion of the paper was that the authors came up with their own definition. That is not a definition in my book.

> second a definition is obviously not a prerequisite as evidenced by natural selection

Right, we just need a universe, several billion years, and a sprinkle of evolution, and we'll also get intelligence. Maybe.


Cool, you can move goalposts and claim that no true Scotsman would define intelligence that way; in addition, you are confusing sufficient and necessary conditions.


An Arxiv paper listing 70 different definitions of intelligence is not the evidence that you seem to think it is.


yes it is


Imagine such an amazing productivity tool, so amazing that you have to force your users into using it. As a person who was just born yesterday, I'm quite sure that the other technologies constantly compared to LLMs, like the internet and smartphones, must have endured the same adoption barriers, right?


In Brazil we already have a free, instantaneous payment system with a well-documented API:

https://github.com/bacen/pix-api
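For a flavor of how simple it is: creating an immediate charge ("cob") is a single authenticated PUT of a small JSON body. The sketch below only builds the payload; the field names follow the published spec, but the Pix key, CPF, and amounts are invented for illustration:

```python
import json

# Sketch of the JSON body for PUT /cob/{txid} in the Pix API
# (field names per the bacen/pix-api spec; all values here are made up).
payload = {
    "calendario": {"expiracao": 3600},  # charge expires in one hour
    "devedor": {"cpf": "12345678909", "nome": "Francisco da Silva"},
    "valor": {"original": "37.00"},     # amount as a decimal string
    "chave": "7d9f0335-8dcc-4054-9bf9-0dbd61cd1478",  # receiver's Pix key
    "solicitacaoPagador": "Payment for services rendered.",
}

body = json.dumps(payload, ensure_ascii=False)
print(body)
```

The actual request additionally requires OAuth2 client credentials and mTLS against the PSP, per the repository's documentation.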


In Brasília, Brazil, only the main avenues are named, and all addresses go by block, just like in Japan.


It is a common misconception, but evolution does not happen at the level of individuals but of populations, so a single individual not reproducing is irrelevant, as long as the local population carrying the same genes does successfully reproduce.
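A toy Wright-Fisher-style simulation (my own illustration; population sizes and frequencies are arbitrary) makes the point numerically: remove one carrier from the breeding pool and the allele's frequency in the next generation barely moves.

```python
import random

random.seed(0)

POP = 1000       # population size
CARRIERS = 200   # individuals carrying the allele of interest

def next_generation(carriers, pop, non_reproducing_carriers=0):
    """Draw the next generation's carrier count: each of `pop` offspring
    inherits the allele with probability equal to its frequency in the
    breeding pool, from which some carriers may be excluded."""
    freq = (carriers - non_reproducing_carriers) / (pop - non_reproducing_carriers)
    return sum(random.random() < freq for _ in range(pop))

# One carrier never reproduces; the allele persists at roughly 20% anyway.
survivors = next_generation(CARRIERS, POP, non_reproducing_carriers=1)
print(survivors)
```

Selection only shifts outcomes when it acts on enough individuals to move the frequency itself; a single lost carrier is noise.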


Ok how about "an organism reproduces or it doesn't" then?

Evolution still can't be "gamed".

