Hacker News | Alex4386's comments

"Minecraft: Java Edition" has been obfuscated since the release. < Classic Microsoft move.

No, it was obfuscated since around 1.8, when you (Microsoft) bought up Mojang Studios. Before that? Meh, it wasn't. That's the main reason JE has had a broader mod ecosystem from the start, the result being that 1.7.2 is one of the most active modded versions, since most mods couldn't get past around 1.8.

The motive behind this is probably that they found out people couldn't get their mods/server software updated in time (due to the extra work required), which led to people being really reluctant to update their versions.


I learned to code by modding Minecraft, starting at ~1.6 a few years before the Microsoft acquisition.

It was definitely already obfuscated by then, the Microsoft acquisition had nothing to do with it.

If anything, looking back over all these years, Microsoft has largely delivered on the promise to not fuck up the game and its community. They’ve mostly kept their hands off it, besides the Microsoft account stuff (which makes sense why they did it, but a lot of people are still understandably annoyed). Hell, they’ve kept up two separate codebases in fairly close feature parity for _years_. I doubt they’d have kept JE if there weren’t people on that team who genuinely cared.


Minecraft has been obfuscated since the start. Even 1.7 is still obfuscated.


> No, it was obfuscated since around 1.8, when you (Microsoft) bought up Mojang Studios. Before that? Meh, it wasn't.

Huh? This is not true. The very first version released in 2009 was obfuscated with ProGuard, the same obfuscator used today.

The reason Minecraft 1.7 was a popular version for modding was that Forge was taking too long to come out, and the block model system was changed in a fundamental way in the next update. It has nothing to do with obfuscation.

> The motive behind this is probably that they found out people couldn't get their mods/server software updated in time (due to the extra work required), which led to people being really reluctant to update their versions.

Not really accurate. The Minecraft modding and custom server software ecosystem has more agility right now than it ever had in the past. In the past 5 years, a remarkable shift has occurred: people have actually started updating their game & targeting the latest version. Minecraft 1.21 has the highest number of mods in the history of the game.


1.7.10 is definitely obfuscated, and 1.7 had one of the longest "lifespans" of a Minecraft version.

The best thing to happen to Minecraft is 1.7.10 backporting; the second best thing has been breaking the Forge monopoly on modding.

(The code quality of mods back in the 1.7 days ranges from "pretty decent" to "absolutely horrendous" mind you.)


This is deliberate misinformation.

You can easily see that versions prior to Beta 1.8 were obfuscated just by downloading the .jar for the older versions on minecraft.wiki.

You can even view some of the old MCP mappings here: https://archive.org/details/minecraftcoderpack


> This is deliberate misinformation.

It’s disinformation if it’s deliberate.


An anti-American party doing its casual "America bad" speeches, huh. Btw, it's their fault for not filing proper visas. I don't get why there's a nationwide fiasco about LG Energy Solution incorrectly filing visas and then asking the government for help with their screw-ups.

*Note for anyone on Hacker News: Hani is well known as a left-wing news outlet in Korea. Think CNN, and take it with a grain of salt.


Congratulations, you just reinvented age/identity verification in South Korea!

Now you are one step closer to creating a government-ID-based tracking landscape, just like in South Korea.


lol - the US already has it; we just subcontracted it to FAANG!


Well, some bots even spoof User-Agents and fire off tons of requests without any proper rate limiting (looking at you, ByteSpider).

Nobody played fair, even before the LLMs, so now we get PoW challenges everywhere.

And what is that conclusion? Since ad blockers are used everywhere, it's OK for corporations not to license the articles directly and to just yank them and put them into a curation service, especially without ads? That's a licensing issue. The author allowed you to view the article if you provide them monetary support (i.e. ads); they didn't allow you to reproduce and republish the work by default.

Also, calling what the browser itself does "reproducing"? Yes, the data might be copied in memory (though I wouldn't call that reproducing the material - it's more like a transfer from the server to another machine), but redistribution is the main point here.

It's like saying, "part of the variable is replicated from the L2 cache into a register, so reproducing the whole file in DRAM must be authorized too." The thing you're calling "reproducing that should not happen in the first place" can't be prevented unless you bring in non-Turing computers that don't use active memory.


The only reason you can say "looking at you ByteSpider" is that it identifies itself. In 2025, that qualifies it as a nice bot.

The nasty bots make a single access from an IP, and don't use it again (for your server), and are disguised to look like a browser hit out of the blue with few identifying marks.


tl;dr: If you are not directly affecting the "sales" of the product, you are good to go. But it seems Perplexity did, and is (as they might put it) directly trying to compete as a news source.

Personally, regarding their news service: their summarization is kinda misleading, with AI hallucinations in some places.


No. It's BOTH an attribution AND a license violation.


Yes, but sublicensing to an even more permissive ("free-er") license (GPLv3+ to Apache 2.0) is a license violation.

The GPL is supposed to be viral; if you are using a project that adopted it, you are taking on that risk with it. If you just change the license and take the code, that's wrong and needs attention. If yoinking GPL code and relicensing it under another, more permissive license were "legal", https://gpl-violations.org wouldn't exist in the first place (i.e. you could just take the Linux kernel code, rename it something like "mynux", redistribute it under BSD-3-Clause, and "not distribute the derivative part").


People really should stop calling a glorified OpenAI API wrapper open-source software.


There are several free alternatives to OpenAI that use the same API, which would make it possible to substitute one of those models for OpenAI in this extension. At least on paper. There is an open issue on the GitHub repository requesting something like that.

So, it's not as clear cut. The general approach of using LLMs for this is not a bad one; LLMs are pretty good at this stuff.
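
To make the "same API" point concrete, here is a minimal sketch of what the client-side substitution looks like - assuming the openai Python package and a local OpenAI-compatible server such as Ollama listening on localhost:11434; the model name and prompt are purely illustrative, not anything this extension actually ships:

    # Minimal sketch (assumption: an Ollama or other OpenAI-compatible server
    # is running locally and exposing the /v1 endpoints; model name is illustrative).
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # point at the local server instead of api.openai.com
        api_key="not-needed-locally",          # local servers typically ignore the key
    )

    response = client.chat.completions.create(
        model="llama3.1",  # whatever model the local server has pulled
        messages=[{"role": "user", "content": "Hello from a local model."}],
    )
    print(response.choices[0].message.content)

The extension would still have to expose the base URL and model name as settings for this to work, which is roughly what that open issue is asking for.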


Yes, but the API at the other end is providing the core functionality. Simply swapping out one LLM for another - let alone one from a different company altogether - will completely change the effectiveness and usefulness of the application.


Well, as we see with AI applications like "Leo AI" and "Continue", a locally run LLM can be a fantastic replacement for proprietary offerings.


FWIW I’ve found local models to be essentially useless for coding tasks.


Really? Maybe your models are too small?


The premier open-weight models don't even perform well on the public benchmarks compared to frontier models. And that's assuming at least some degree of benchmark contamination for the open-weight models.

While I don't think they're completely useless (though it's close), calling them fantastic replacements feels like an egregious overstatement of their value.

EDIT: Also wanted to note that I think this becomes as much an expectations-setting exercise as it is evaluation on raw programming performance. Some people are incredibly impressed by the ability to assist in building simple web apps, others not so much. Experience will vary across that continuum.


Yeah, in my comparison of DeepSeek Coder 2 Lite (the best coding model I can find that’ll run on my 4090) to Claude Sonnet under aider…

DeepSeek Lite was essentially useless: too slow, and the edits were too low quality.

I’ve been programming for about 17 years, so the things I want aider to do are a little more specific than building simple web apps. Larger models are just better at it.

I can run the full DeepSeek Coder model on some cloud and probably get very acceptable results, but then it’s no longer local.


Woah woah! Those are fighting words. /s


One would hope that, since the problem these models are trying to solve is language modeling, they would eventually converge around similar capabilities.


everyone stands on the shoulders of giants.


Things standing on the shoulders of proprietary giants shouldn't claim to be free software/open source.


Their interfacing software __is__ open source, and they're asking for your OpenAI API key to operate. I would expect/desire open-source code if I were to use that, so I could be sure my API key was only being used for my work - so it's only my work that I'm paying for, and the key hasn't been stolen in some way.


My older brother, who got me into coding, learned to code in assembly. He doesn't really consider most of my work writing in high-level languages to be "coding". So maybe there's something here. But if I had to get into the underlying structure, I could. I do wonder whether the same can be said for people who just kludge together a bunch of APIs that produce magical result sets.


> But if I had to get into the underlying structure, I could.

How do you propose to get into the underlying structure of the OpenAI API? Breach their network and steal their code and models? I don't understand what you're arguing.


> How do you propose to get into the underlying structure of the OpenAI API?

The fact that you can’t is the point of the comment. You could get into the underlying structure of other things, like the C interpreter of a scripting language.


But what about the microcode inside the CPU?


That tends to not be open source, and people don’t claim that it is.


I think the relevant analogy here would be to run a local model. There are several tools that make it easy to run local models behind a local API. I run a 70B finetune with some tool use locally on our farm, and it is accessible to all users as a local OpenAI alternative. For most applications it is adequate, and the data stays on the campus area network.


A more accurate analogy would be: are you capable of finding and correcting errors in the model at the neural level if necessary? Do you have an accurate mental picture of how it performs its tasks, in a way that allows you to predictably control its output, if not actually modify it? If not, you're mostly smashing very expensive matchbox cars together, rather than doing anything resembling programming.


As an ancient embedded-systems programmer, I feel your frustration… but I think it’s misguided. LLMs are not “computers”. They are a statistics-driven tool for navigating human-written (and graphical) culture.

It just so happens that a lot of useful stuff is in that box, and LLMs are handy at bringing it out in context. Getting them to “think” is tricky, and it’s best to remember that what you are really doing is trying to get them to talk as if they were thinking.

It sure as heck isn’t programming lol.

Also, it’s useful to keep in mind that “hallucinations” are not malfunctions. If you were to change parameters to eliminate hallucinations, you would lose the majority of the unusual usefulness of the tool: its ability to synthesise and recombine ideas in statistically plausible (but otherwise random) ways. It’s almost like imagination. People imagine goofy shit all the time too.

At any rate, using agentic scripting you can get it to follow a kind of plan, and it can get pretty close to an actual “train of thought” facsimile for some kinds of tasks.

There are some really solid use cases, actually, but I’d say mostly they aren’t the ones trying to get LLMs to replace higher-level tasks. They are actually really good at doing rote menial things. The best LLM apps are going to be the boring ones.


I think the argument is that stitching things together at a high level is not really coding. A bit of a no-true-Scotsman perspective. The example is that anything more abstract than assembly is not even true coding, let alone creating a wrapper layer around an LLM.


This stuff is starting to enter Debian as well -_-'


The plan is to add local LLM support, so the goal is fully OSS; agreed, the initial wording could have been better.


Nah, Hunminjeongeum (the initial blueprint), which was initially designed as a phonetic alphabet, did support differentiating those phonemes.

It was only in the early 1900s that a "nationalist" (SiKyeong Ju) decided to make the "phonetic alphabet" the primary writing system and revamped it to focus on making words identifiable rather than on following pronunciation. Hangul now is the equivalent of a modded phonetic alphabet tweaked so that each word is identifiable; it doesn't represent the actual pronunciation of the word. That's also the reason hanja (= kanji, or "Chinese characters", or whatever your country calls them) was around until the late 90s.

If you think Korean grammar is crazy, it's all thanks to him.


> hanja (= kanji, or "Chinese characters", or whatever your country calls them)

(Also, for reference, we say "Chinese characters" because that's what every country calls them. Hanja is just the Korean reading of 漢字. Kanji is the Japanese reading of 漢字. 漢字 means "Chinese characters".)


I can't tell what you're saying "nah" about. The comment I responded to said that, to a Western ear, [pap̚] vs [bap̚] is a "very big difference" that isn't noted by the Korean writing system, because Korean cares more about aspiration [of stop consonants, presumably] than voicing. It was in English.

I observed that English shares that quality with Korean, and English speakers are not even able to hear the difference between [pap̚] and [bap̚], making this an odd choice to exhibit as a "very big difference" which will, by its lack of representation in the writing system, confuse English-speaking Westerners. It's a difference they can't perceive; why would they be confused over the fact that two identical sounds are both written the same way?


Well, that's only since the late 90s. Before that, hanja (the equivalent of kanji in Japanese) was used in news, official documents, etc., alongside hangul. You needed to know hanja just to read out what was written.

