I logged in to the platform - I won't share the names myself, but basically:
* the company was registered in Latvia in 2010 (i.e. it's included in the VAT register over here)
* the board has had 1 member since 2009, registered in Russia (Russian passport)
* it has 1 shareholder, Ascensio System Limited in the UK (05718967)
* it has one beneficial owner, whose data was updated in 2023 from Russia to Turkey (passport issued in Istanbul)
In 2024 their turnover was just short of 3 million EUR, and profit-wise they seem to be 1 million EUR in the red. Also, not sure if the site is busted, but it shows the number of employees as 1.
So yeah, the company is registered over here, and it seems like they're trying to distance themselves from Russia for obvious reasons. Not sure why the parent comment got downvotes; that seems worth mentioning.
Following that up in the UK companies registry, the director of Ascensio System Limited has been using a service address in London since May 2025. The same filing, however, notes that his usual residential address has remained unchanged and appears to be in Nizhny Novgorod, Russia.
The beneficial owner is Onlyoffice Capital Group Pte. Ltd in Singapore.
It all seems surprisingly murky - you'd normally expect a relatively small organisation to have a more straightforward structure, even allowing for its international nature.
Playing devil's advocate for a second: it might be easier for a single-person company to open up a bunch of legal entities in different places where taxes etc. are more favorable. In the Russian case, the guy might just want to be able to accept payments. Or maybe he's making sure he has somewhere to go in case of trouble. I would be very unsurprised if he took advantage of Turkey's "$250K real estate purchase gets you full citizenship, and you can even rent out or sell the place" scheme to live there.
It seems overkill for what is actually a pretty tiny company - I doubt they would be big enough to trigger those sorts of incentives, at least for the UK and Singaporean entities (Latvia or Turkey might, I suppose, be different - but then why bother routing it through the UK?).
I'd guess that the happy case is going to be that, yes, this structure was forced on them as a by-product of sanctions or similar negative trade policies. But I'd be worried that the software business is actually a front for something else, which would suggest that OnlyOffice might be more vulnerable to changes in legal climate than most other projects of that size.
They might have one official employee, but there are a bunch of people active on their GitHub. They might be contractors or employees of a different, related company.
Feels like a complex situation to me - I like that there's all sorts of software out there, esp. if there are improvements to UI/UX, but I also think that the OpenDocument formats are nice to support when possible and push as defaults (even if they unfortunately might be a bit confusing to casual users who are used to the MS ones).
I tried OnlyOffice and it seemed okay, even if I daily drive LibreOffice most of the time. It's nice that it's open source, but I also understand the people who are cautious about stuff coming out from Russia - I wouldn't hate on software for being developed by people from there, but it also presents an obvious risk.
> Nowadays, there is much less need for that because a charge lasts much longer, and if you do run low you can fast charge in 30 minutes or so. Not buying extra spare batteries for every device means less e-waste, not more!
My current iPhone's battery capacity is already starting to decrease, and it was never great to begin with (I needed the phone for work). If it were replaceable, I'd do what I used to do with Android phones years ago: get a spare; if the old one was really bad or turning into a pillow, recycle it and keep using the replacement; otherwise, use both side by side and not even need a separate charging bank.
Lots of people will look in the direction of getting a new phone altogether - I might have to do that as well - turning the whole phone into e-waste instead of giving it 5 more years of life.
And it's not just that - most phones will throttle themselves on a deteriorated battery to limit current spikes that could cause brownouts. So not only does your otherwise perfectly fine phone no longer keep a charge for long, it literally becomes slower as it ages just because of its battery.
All iPhone batteries are replaceable. And since the iPhone 16 or so, they’ve already improved the design to make it compliant with the EU battery regulation.
It’s the Apple Watch, AirPods, etc that are more of a concern...
> However usually code is easy to change, so defaulting to "just merge it" and creating followup tasks is often the cheaper approach than infinite review cycles.
I wish this were the "default" mindset everywhere, especially in those cases where you have that one colleague who loves to nitpick everything and sees no issue with both holding up releases and wasting your time over menial, pedantic stuff. It would be so much easier to merge working code and let it work, and keep those TODOs in the backlog (i.e. the trash).
In a sane world, code review would be like:
1. Will this work and not break anything? We checked it, it's okay. There are no apparent critical or serious issues here.
2. Here's a list of stuff that you can change if you wish, here's why it might be an improvement.
3. Okay, we have some leftover nice-to-haves, let's keep track of those for later (or not) and merge.
It gets infinitely worse if you're working on 3 projects in parallel and the person wants long-winded calls, starts nitpicking about naming, or wants things done exactly their way as if it's the only way (doubly worse if their way is actually worse by most metrics and tastes).
I don't know whether the free version of Nginx has a relying party implementation, but I have used this plugin for Apache2 and OIDC in the past: https://github.com/OpenIDC/mod_auth_openidc
I know it's not just OAuth, but OIDC has pretty decent provider support, and I could even self-host a Keycloak instance - it was annoying to set up but worked okay in practice: I could define my own users, get a decent login page when needed, and otherwise just get into the sites I wanted.
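For what it's worth, the rough shape of it in a web app looks something like this - a minimal relying-party sketch in Python using Flask and Authlib, where the realm URL, client ID/secret, and routes are placeholders I made up for illustration, not anything from my actual setup:

    # Minimal OIDC relying party against a self-hosted Keycloak realm.
    # Assumes Flask + Authlib; realm URL, client credentials and routes are placeholders.
    from flask import Flask, redirect, session, url_for
    from authlib.integrations.flask_client import OAuth

    app = Flask(__name__)
    app.secret_key = "change-me"  # used to sign the session cookie

    oauth = OAuth(app)
    oauth.register(
        name="keycloak",
        client_id="my-app",
        client_secret="my-secret",
        # Keycloak publishes all its OIDC endpoints in this discovery document
        server_metadata_url="https://keycloak.example.com/realms/home/.well-known/openid-configuration",
        client_kwargs={"scope": "openid profile email"},
    )

    @app.route("/login")
    def login():
        # send the browser to Keycloak's login page
        return oauth.keycloak.authorize_redirect(url_for("auth", _external=True))

    @app.route("/auth")
    def auth():
        # Keycloak redirects back here; exchange the authorization code for tokens
        token = oauth.keycloak.authorize_access_token()
        session["user"] = token.get("userinfo")  # claims parsed from the ID token
        return redirect("/")

    @app.route("/")
    def index():
        user = session.get("user")
        return f"Hello, {user['preferred_username']}" if user else redirect(url_for("login"))

Behind mod_auth_openidc the app side is even thinner, since Apache handles the redirect dance itself and just passes the claims along to whatever sits behind it.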
Personally though, it felt a bit overkill compared to basic auth for anything that isn't run in public or for a lot of users.
> Software and online things I've used that seem to be better than they were before ChatGPT was introduced: 0
I don't think you can really get any sort of a signal on this?
Nobody is all that sensitive to the number of features that get shipped in any project, and nobody really perceives how many people or how much time was needed to ship anything. As a user, unless that means a 5x difference in the price of some service, you don't really see or care about any of that - and even if there were savings on the part of any developer/company, they'd probably just pocket the difference. Similarly, if there's a product or service that exists thanks to vibe coding and wouldn't have existed otherwise, you probably don't know that particular detail.
Even when fuckups and bugs do happen, there's also no signal whether it's explicitly due to AI (or whether people are scapegoating it), or just management pushing features nobody wants and enshittifying products and entire industries for their own gain.
Quite possibly because software engineering feels like tofu-dreg construction all the way down: it's a bunch of suits pushing devs to make features with ever-changing technologies and practices, where the framework/technology/approach of the year/month/week eats up all the focus and nobody ever establishes proper baselines and standards for what "good code" is. Instead the nerds argue ad infinitum about a bunch of subjective stuff while drowning in accidental complexity, made worse by microservices, AI slop, and chasing zero downtime instead of zero bugs. It's bad incentives all the way down. On the other end of the spectrum, you have codebases that perhaps should have taken advantage of some of the newfound wisdom of the past 40 years, but instead they're written in COBOL or FORTRAN and the last devs who know the tech are literally dying out.
There are nigh-infinite combinations of tech stacks out there, and because corpos literally won't incentivize people not to job hop, you don't really get that many specialists with 20 years of experience in a given technology who at least have a chance at catching the stuff that formal code analysis and other tooling didn't - because nobody cares that much about validating correctness past saying "Yeah, obviously you should have some test coverage." To give an example, whoever came up with the idea of wiring up the internals of your app at runtime on startup instead of during compilation, a la the majority of Spring and Spring Boot, should go to jail. And everyone who made dynamic languages as well. And whoever pushed the idea that there should only be a loose contract between the networked parts of a system (i.e. not something MORE correct than SOAP).
Put everyone in jail for daring to be employed in that shitshow: devs, execs, and the tech vendors as well, for not prioritizing code correctness like you would in a spaceship (aside from Ariane 5) or a plane (aside from MCAS) or proper financial systems (aside from Knight Capital) or CPUs (aside from the Pentium FDIV bug). Sure, there's plenty of proper engineering out there, but my experience makes me view the claim that we should treat software like "real engineering" as a sick joke when so much of the stuff I've seen and used isn't - about the same reaction you'd get if you suggested that 100% code coverage is something you should do if you're serious, though obviously that would mean never shipping and we can't have that. Software is like the Wild West except people pretend to be serious; some days it feels like the only winning move is not to play (and to starve).
Sorry about the rant; I'm pissed off at the status quo and the state of the industry. It feels like building a house of cards, except some of the cards aren't even rectangular. They wasted millions in my country on an e-health system that doesn't work, for a country of like 2 million people. I'm not surprised in the slightest that breaches and fuckups happen aplenty at the large orgs too. It's absurd, the world we live in.
> take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.
I don’t even care about AI or not here. That’s like copying someone’s work, badly, and either not understanding or not giving a shit that it’s wrong? I’m not sure which of those two is worse.
I got the Max subscription and have been using Opus 4.6 since; the model is way above pretty much everything else I've tried for dev work. While I'd love for Anthropic to let me (easily) build a hostable server-side solution for parallel tasks without having to go the API key route and pay per token, I will say that the Claude Code desktop app (more convenient than the TUI one) gets me most of the way there too.
I started using it last week and it's been great. It uses git worktrees, and an experimental feature (spotlight) lets you quickly check changes from different agents.
I hope the Claude app will add similar features soon
Instead of having my computer be the one running Claude Code and executing tasks, I might prefer to offload that to my other homelab servers and have them run agents for me - working pretty much like traditional CI/CD, but with LLMs working on various tasks in Docker containers, each on the same or different codebases, each with their own branches/worktrees, submitting pull/merge requests to a self-hosted Gitea/GitLab instance or whatever.
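Roughly what I'm imagining, as a sketch: one throwaway container per task, fired off like a CI job. This assumes the docker SDK for Python, a prebuilt image with the claude CLI installed, and API-key auth; the image name, worktree paths, and task descriptions are all placeholders.

    # One disposable container per task, each in its own git worktree.
    # Assumes the docker SDK for Python and the claude CLI's non-interactive
    # -p (print) mode; names and paths below are made up for illustration.
    import docker

    client = docker.from_env()

    tasks = [
        ("fix-flaky-tests", "/srv/worktrees/fix-flaky-tests"),
        ("update-deps", "/srv/worktrees/update-deps"),
    ]

    for branch, worktree in tasks:
        client.containers.run(
            image="my-claude-agent:latest",  # image with the claude CLI baked in
            command=["claude", "-p", f"Work on branch {branch} and prepare a merge request."],
            environment={"ANTHROPIC_API_KEY": "sk-ant-..."},  # per-token API billing
            volumes={worktree: {"bind": "/workspace", "mode": "rw"}},
            working_dir="/workspace",
            detach=True,       # run in the background, like a CI job
            auto_remove=True,  # clean up the container when it exits
        )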
However, you're not really supposed to use it with your Claude Max subscription - you're supposed to use an API key, where you pay per token (which doesn't seem nearly as affordable compared to the Max plan; nobody would probably mind if I ran it on homelab servers, but if I put it on work servers for a bit, technically I'd be in breach of the rules):
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
It just feels a tad more hacky than copying an API key when you use the API directly; there is stuff like https://github.com/anthropics/claude-code/issues/21765 but also "claude setup-token" (which you probably don't want to use all that much, given the token's lifetime?).
> Us having to specify things that we would never specify when talking to a human.
The first time I read that question I got confused: what kind of question is that? Why is it being asked? It should be obvious that you need your car in order to wash it. The fact that it is being asked implies, in my mind, that there is an additional factor/complication that makes asking it worthwhile, but I have no idea what. Is the car already at the car wash and the person wants to get there? Or do they want to, I don't know, get some cleaning supplies from there and wash it at home? It didn't really parse in my brain.
I would say the proper response to this question is not "walk, blah blah blah" but rather "What do you mean? You need to drive your car to have it washed. Did I miss anything?"
Yes, this is what irks me about all the chatbots, and the chat interface as a whole. It is a chat-like UX without a chat-like experience. Like you are talking to a loquacious autist about their favorite topic every time.
Just ask me a clarifying question before going into your huge pitch. Chats are a back & forth. You don’t need to give me a response 10x longer than my initial question. Etc
People offing themselves because their chatbot lover convinced them it's time is absolutely not worth the extra addiction potential. We even witnessed this happen with OAI.
It's a fast track to public disdain and heavy handed government regulation.
For OpenAI, regulation would be preferable to the tort lawyers. In general, the LLM companies should want regulation, because the alternative is tort, product liability tort, and contract law.
Without the protections that regulation could afford, there is no way to offer such wide-ranging uses of the product without also accepting significant liability. If the range of "foreseeable misuse" is very broad and deep, so is the possible liability. If your marketing says that the bot is your lawyer, doctor, therapist, and spouse in one package, how is one to say that the company can escape all the comprehensive duties that attach to those social roles? Courts will weigh the tiny and inconspicuous disclaimers against the very large and loud marketing claims.
The companies could protect themselves in ways not unlike the ways in which the banking industry protects itself by replacing generic duties with ones defined by statute and regulation. Unless that happens, lawyers will loot the shareholders.
Yes, because that is how regulations really work and what their purpose is. In practice, all companies, both tiny and massive, do everything they can to use the state to quash competition and to reduce the risks of litigation.
Software in general has been regulated with a light touch, in part because most of the damage software can really cause is economic rather than personal injury. The lines blur when companies release products that cause mental injuries to users that courts interpret as physical injuries, or when the software reasonably contributes to someone, e.g., going crazy and killing another person.
No one would seriously think of holding Microsoft liable if a kidnapper uses Word to draft a ransom note. But if Copilot tells you to microwave a baby and you do it, many judges will want to take a close look at the operation of that software service, irrespective of voluminous contract disclaimers. The only way the Microsofts of the world can escape that type of liability is with comprehensive regulation.
Or sama is just waiting to gate companions behind a premium subscription in some adult-content package, as he has hinted that something along these lines may be forthcoming. Maybe tie it in with the hardware device Ive is working on. Some sort of hellscape Tamagotchi.
Recall:
"As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote in the Oct.
I'm struggling a bit with how to word this with social decorum, but how long do we reckon it will take until there are AI-powered adult toys? There's a market opportunity that I do not want to see fulfilled, ever.
I did work on a supervised fine-tuning project for one of the major providers a while back, and the documentation for the project was exceedingly clear about the extent to which they would not tolerate the model responding as if it were a person.
Some of the labs might be less worried about this, but they're not by any means homogenous.
With ChatGPT, at least, you can tell the bot to work that way using [persistent] Custom Instructions, if that's what you want. These aren't obeyed perfectly (none of the instructions are, AFAICT), but they do influence behavior.
A person can even hammer out an unstructured list of behavioral gripes, tell the bot to organize them into instructional prose, have it ask clarifying questions and revise based on answers, and produce directions for integrating them as Custom Instructions.
From then on, it will invisibly read these instructions into context at the beginning of each new chat.
Mold it and steer it to be how you want it to be.
(My own bot tends to be very dry, terse, non-presumptuous, pragmatic, and profane. It's been years now since it has uttered an affirmation like "That's a great idea!" or "Wow! My circuits are positively buzzing with the genius I'm seeing here!" or produced a tangential dissertation in response to a simple question. But sometimes it does come back with functional questions, or phrasing like "That shit will never work. Here's why.")
This is a great point, because when you ask it (Claude) if it has any questions, it often turns out it has lots of good ones! But it doesn't ask them unless you ask.
You can define "ponder" in multiple ways, but really this is why thinking models exist - they turn over the prompt multiple times and iterate on responses to get to a better end result.
Well I chose the word “ponder” carefully, given the fact that I have a specific goal of contributing to this debate productively. A goal that I decided upon after careful reflection over a few years of reading articles and internet commentary, and how it may affect my career, and the patterns I’ve seen emerge in this industry. And I did that all patiently. You could say my context window was infinite, only defined by when I stop breathing.
That is to say, all of that activity I listed is activity I’m confident generative AI is not capable of, fundamentally.
Like I said in a cousin comment, we can build Frankenstein algorithms and heuristics on top of generative AI but every indication I’ve seen is that that’s not sufficient for intelligence in terms of emergent complexity.
Imagine if we had put the same efforts towards neural networks, or even the abacus. “If I create this feedback loop, and interpret the results in this way, …”
Probably the lack of external stimuli. Generative AI only continues generating when prompted. You can play games with agents and feedback loops but the fundamental unit of generative AI is prompt-based. That doesn’t seem, to me, to be a sufficient model for intelligence that would be capable of “pondering”.
My take is that an artificial model of true intelligence will only be achieved through emergent complexity, not through Frankenstein algorithms and heuristics built on generative AI.
Generative AI does itself have emergent complexity, but I'm bearish that, even if we hooked it up to a full human sensory input network, it would be anything more than a 21st-century reverse mechanical Turk.
Edit: tl;dr Emergent complexity is a necessary but insufficient criterion for intelligence
You can get it to ask you clarifying questions just by telling it to. And then you usually just get a bunch of questions asking you to clarify things that are entirely obvious, and it quickly turns into a waste of time.
The only time I find that approach helpful is when I'm asking it to produce a function from a complicated English description I give it where I have a hunch that there are some edge cases that I haven't specified that will turn out to be important. And it might give me a list of five or eight questions back that force me to think more deeply, and wind up being important decisions that ensure the code is more correct for my purposes.
But honestly that's pretty rare. So I tell it to do that in those cases, but I wouldn't want it as a default. Especially because, even in the complex cases like I describe, sometimes you just want to see what it outputs before trying to refine it around edge cases and hidden assumptions.
Google Gemini often gives an overly lengthy response, and then at the end asks a question. But the question seems designed to move on to some unnecessary next step, possibly to keep me engaged and continue conversing, rather than seeking any clarification on the original question.
This is a topic that I’ve always found rather curious, especially among this kind of tech/coding community, which really should be more attuned to the necessity of specificity and accuracy. There seems to be a base set of assumptions that are intrinsic to and a component of ethnicities and cultures - the things one can assume one “would never specify when talking to a human [of one’s own ethnicity and culture].”
It’s similar to the challenge foreigners have with cultural references, idioms, and the figurative speech a culture has a shared mental model of.
In this case, I think what is missing is a set of assumptions based on logic, e.g., when someone states that they want to do something, one assumes that all required components will be available, will accompany the subject, etc.
I see this example as really not all that different from a meme that was common in, I think, the 80s and 90s: people would forget to buy batteries for Christmas toys even though it was clear an electronic toy would need them. People failed that basic test too, and those were humans.
It is odd how people are reacting to AI not being able to do these kinds of trick questions, while if you posted something similar about how you tricked some foreigners you’d be called racist, or people would laugh if it was some kind of new-guy hazing.
AI is from a different culture and has just arrived here. Maybe we should be more generous and humane… most people are not humane though, especially the ones who insist they are.
Frankly, I’m not sure it bodes well for how people would respond if aliens ever arrive on Earth; and AI is arguably only marginally different from humans - something an alien life form that could make it to Earth surely would not be.
AI isn’t “from a different culture”. It doesn’t have culture. Any culture it does have is what it has sucked up from its training data and set in its weights.
There is no need to be “humane” to AI because it possesses no humanity. It has no personhood at all. It can’t feel. You can’t be inhumane to something that is literally incapable of feeling.
A blade of grass has more humanity and is more deserving of respect than anything being referred to as AI does.
Aliens might not be received well but it’s going to depend a lot on how they show up.
AI is a “revolution” where the promise is that nobody will have to do meaningless work anymore (I guess).
The only problem is that right now basically everyone has to do work, meaningful or “meaningless”, because the dominant thinking requires it for human survival. Weird how most people aren’t happy about the thing that is pitched to take away the meager scraps they get under the current regime.
> A blade of grass has more humanity and is more deserving of respect than anything being referred to as AI does.
Emphatically disagree.
Even setting aside the obvious absurdity in this statement - an LLM is emulating a human (quite well!) and a blade of grass is not:
I don't trust any human who can interact with something that uses the same method of communication as a human, and for all intents and purposes communicates like a human, and not feel any instinct to treat it with respect.
This is the kind of mindset that leads to dehumanizing other humans. Our brain isn't sophisticated enough to actually compartmentalize that - building the habit that it's right to treat something that talks like a sapient as if it deserves zero respect is going to have negative consequences.
Sure, you can believe it's just a tool, and consciously let yourself treat it as one. But treat it like an incompetent intern, not a slave.
I think ascribing humanity to something that isn’t human is far more dehumanizing to actual, real-life humans than the alternative. You are taking away actual people’s humanity if you’re giving it to anything we call AI.
I am capable of distinguishing between talking to another person and talking to an LLM and I don’t think that is hard to do.
I don’t think there is any other word than delusional to describe someone who thinks LLMs should be treated as humans.
Whether you view the question as nonsensical, the simplest kind of riddle, or even an intentional "gotcha" doesn't really matter. The point is that people are asking the LLMs very complex questions where the details are buried even more deeply than in this simple example. The answers they get could be completely incorrect, flawed approaches/solutions/designs, or just mildly misguided advice. People are then taking this output and citing it as proof, or even as objectively correct. I think there are a ton of reasons for this, but a particularly destructive one is that responses are designed to be convincing.
You _could_ say humans output similar answers to questions, but I think that is being intellectually dishonest. Context, experience, observation, objectivity, and actual intelligence are clearly important, and they are not something the LLM has.
It is increasingly frustrating to me that we cannot just use these tools for what they are good for. We have, yet again, allowed big tech to go balls deep into ham-fisting this technology irresponsibly into every facet of our lives in the name of capital. Let us not even go into the finances of this shitshow.
Yeah people are always like "these are just trick questions!" as though the correct mode of use for an LLM is quizzing it on things where the answer is already available. Where LLMs have the greatest potential to steer you wrong is when you ask something where the answer is not obvious, the question might be ill-formed, or the user is incorrectly convinced that something should be possible (or easy) when it isn't. Such cases have a lot more in common with these "nonsensical riddles" than they do with any possible frontier benchmark.
This is especially obvious when viewing the reasoning trace for models like Claude, which often spend a lot of time speculating about the user's "hints" and trying to parse out the intent behind the question.
Essentially, the mental model I use for LLMs these days is to treat them as very good "test takers" with limited open-book access to a large swathe of the internet. They are trying to ace the test by any means necessary and love to take shortcuts that don't require actual "reasoning" (which burns tokens and increases the context window, decreasing accuracy overall). For example, when asked to read a full paper, focusing on the implications for some particular problem, Claude agents will try to cheat by skimming until they get to a section that feels relevant, then searching directly for some words they read in that section. They will do this even if told explicitly that they must read the whole paper. I assume this is because, the vast majority of the time, for the kinds of questions they are trained on, this sort of behavior maximizes their reward function (though I'm sure I'm getting lots of details wrong about the way frontier models are trained, I find it very unlikely that the kinds of prompts these agents get closely resemble data found in the wild on the internet pre-LLMs).
Here's the company info on a Latvian org registry: https://company.lursoft.lv/en/ascensio-system/40103265308