Just kidding. To me the biggest mistake was the standards' slowness in the early 2010s to provide badly needed new functionality such as a proper replacement for table layout, vertical alignment, etc. The cult of knowing specific tricks to get things working properly carried on for too long and made a lot of people annoyed at the semantic web (rip).
Pix has really spurred up small local businesses. It's so much easier to buy digitally from local stores now, or even just a person starting up a business because it required no setup, no fees or anything.
If I need to buy a gift for someone from a store at the mall, for example, I just text them, they send me pictures with the options, I pay instantly via pix and they send the product through local delivery. The whole thing takes 5 minutes of my time and the purchase shows up on my door in 30 minutes.
I saved on time, gas and parking, and meanwhile the store made a sale through a local employee instead of me buying online from their national franchise for example (if it's a franchise of course). Win win for everyone.
Nope, it's much more insidious than that. The user is already on your website, which could be a legitimate website with a malicious owner.
If you look at the screenshot, it's perfectly plausible for a non-tech-savvy user to read that as "realhealthysnacks is asking me to install a legitimate Microsoft application".
Now change the simplified example for a real one from a SaaS product login page with several "Login with ..." buttons, and one of them triggers this.
What... does that mean? A website with a malicious owner is illegitimate by definition. :)
But more to the point, this logic is circular. You're saying PWAs are subject to attack by malicious actors because their users can be attacked by websites controlled by malicious owners. Which is... true. But specious, and true of regular web pages and apps and every other kind of software.
I'm not seeing where you're getting anything novel here at all. If you let people run software written by other people you need some kind of protection against people being fooled by bad software. That is obviously a very hard problem with only imperfect solutions. But those solutions do exist, and that protection exists here in PWAs and needs to be evaded, in a form that is entirely analogous to the way you have to validate a web page you're looking at.
The situation is this: You go to some web store. You click "Sign In With Microsoft" (or Google, or Facebook, etc.). You expect the site to be able to know your Microsoft/Google/Facebook email address. You don't expect the site to be able to take over your entire Microsoft/Google/Facebook account.
So it's a site you trust enough to use, but you don't trust it enough to give it control over your other accounts. This phishing attack gives it control over your other accounts.
I don't understand the answers to your "what is a legitimate website with a malicious owner" question, but I kinda see this as the same concern as downloading a phone app that requests an OAuth login via a native webview. You can't always see the true URL of that login page. But it comes back to what I think is your main point -- you've already downloaded something malicious from the get-go. But I guess there's some damage control if you can spot a fake login page and remove the install.
Yeah, it still needs a malicious person to run the attack of course, but it's a different attack vector. Phishing consists of making the user believe they are on a different website than the one they're actually on.
Most of the time, that requires a convincing-looking URL to redirect from website A to the phishing page (e.g. micr0softlogin.com).
This attack doesn't require that; it all stays within website A, which the user may find legitimate (or it could be a legitimate one that has been compromised).
Another aspect of this is that PWAs have a helpful anti-phishing feature which actually displays a URL bar when you navigate to a different domain. That is entirely twisted by this attack: by staying in website A, that's exactly when the URL bar will be hidden, letting the attacker place a fake one there.
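For context, that URL bar behavior is driven by the web app manifest's "scope" member: navigating outside the scope is what makes the browser show its fallback URL display, so an attack that stays in-scope never triggers it. A minimal manifest (the name and URLs here are hypothetical) looks something like this:

```json
{
  "name": "Example Store",
  "start_url": "/",
  "scope": "/",
  "display": "standalone"
}
```

With "display": "standalone" and every page served under the scope "/", there is no real URL bar on screen to contradict a fake one painted by the page itself.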
But agreed that there are only imperfect solutions to this sort of thing.
microsoft.com is a legitimate website.
The owner of microsoft.com, however, gets your browsing history, reboots your PC during weekends when your rendering is almost complete, puts random adware on your PC without asking you, injects adware into various websites... I'm too lazy to list all the rest, but you get the picture. A legitimate website with a malicious owner.
The point about the newer Bialettis being cheaper is absolutely true. My mother has an old (>10 years) Moka that feels heavy and sturdy. A couple of years ago, after accidentally leaving it on the stove for too long, the bottom chamber and the filter basket got a permanent burnt coffee taste, and we bought a new one to replace it. That one was lighter and came with a significantly thinner filter basket, which I attributed to it either being counterfeit or them shipping cheaper versions of the product to Brazil.
Then, a couple of months ago, I was on vacation in Italy and decided to get a brand new one as a gift, directly from an official Bialetti store. To my surprise, the Mokas in the store felt exactly like the lower-quality one we had bought in Brazil. I didn't even buy the gift.
You see a similar thing with the glass Chemex. There is a healthy market on eBay for pre-1980 models that were hand blown from great quality glass. My wife got me one as a gift and the difference in feel is remarkable. New ones feel fragile; this one feels solid and strong. I’ve broken two of the newer versions and it’s dangerous. The old one is still standing strong in my house, surviving trips in my RV, and generally doing its job. I’m sure I sound like a curmudgeon, but they don’t make stuff like they used to.
I am aware of those, but from what I understand the pre-80s ones are still better. I’m sure given a little time someone who is a Chemex expert will chime in and give a definitive comment.
James Hoffman gives a good history of Bialetti here: https://www.youtube.com/watch?v=upgQsA5kLAk including their sale to Faema and later to a cookware manufacturer, their numerous brushes with bankruptcy, and the stiff competition they've gotten from... even in Italy... pod machines.
Bialetti has been on its last legs since 2015. They're drowning in debt and (in the best-case scenario) are headed toward restructuring soon; in any case, their future's not looking bright.
I'm not sure about the details. IMHO a contributing factor is that they're a company historically centered around manufacturing nearly indestructible appliances (even the newer mokas, however flimsier, don't break easily); once the market was saturated with the flagship product, there's only so much profit they could squeeze out of selling accessories and the like.
I don't know if this is the ideal takeaway. Otherwise, we should all prioritize making our products flimsier and more expensive.
I think it's just a straightforward failure of creativity — or, even more plainly, a failure to understand their customer. They had a great product, which led to a loyal following — why not expand into adjacent markets?
The cruel irony is that their product was related to something that's extremely disposable. Why not get into the business of coffee beans? Why not partner with interesting coffee growers? Subscription businesses have been huge for decades, now — why not offer consumers the ability to buy an espresso bean subscription to go with their Mokapot, thereby generating a reliable recurring revenue stream?
A glimpse at their Wikipedia page [1] suggests they never even tried to branch out from the small, comfortable niche of cookware.
> Otherwise, we should all prioritize making our products flimsier and more expensive.
It's the view of many that this is indeed what most companies prioritize — I'm not saying it's true, but it doesn't seem to be a particularly fringe opinion. It's in the vein of enshittification.
Also, might I ask how you inserted that em-dash? A keyboard shortcut? It's interesting to see fancy typography online.
One can look up the UTF-8 character for different typographical characters and copy and paste them in. On macOS at least, there is a keyboard shortcut for "emojis" (Ctrl+Cmd+Space), and a little window shows up where you can search for emojis by name, and typographical characters by name (such as "em dash"). —pjh
Bring up the keyboard viewer widget on macOS and it will show you a live preview of what each key is... you can hold down the modifiers to see how the keys change.
Brand protection may be one reason. Partnering with a supplier who messes up in the coffee bean arena would hurt their reputation. Some businesses will throw ideas out, see what fits, and maybe get a second really good product. Getting another product and being successful is hard enough, which is why you see larger companies buying out others. It’s easier to buy a successful product from an existing company than to branch out and hit another success.
But even if the market is saturated... is it really? I'm just an armchair expert in this case, but as far as I'm aware, there's not a coffee maker yet in every house; they will eventually break if overheated for example (I broke mine's rubber seal by putting it on the stove without water); and there's plenty of untapped markets out there yet.
That said, one person may buy one coffee maker and never need another one for the next decade or two, or pass it on to their children if it's really good. So what another comment said, them expanding into selling coffee as well for example, sounds like an idea.
If the rubber seal is damaged, you can simply replace it, you don't need to buy a new coffee maker. When people mention that newer coffee makers are cheaper and lower-quality, that's probably Bialetti trying to reach a larger market. I guess lots of people bought one and rarely if ever used it (I think even I have one somewhere, but don't ask me where).
As for expanding into selling coffee: that's natural for systems like Senseo or Nespresso, where you have custom pads/pods which you insert into the machine (not sure if third-party pad/pod makers have to pay a license fee to the "system provider"?), but Bialetti coffee makers work with any ground coffee. Might be an idea nevertheless, but not sure about the odds of success - I imagine most Italians already have their favorite brand of coffee and wouldn't suddenly switch to "Bialetti Coffee" just because they say it works best with their machines.
Not really, it remains true even when you take competition into account. Also, the rest of the industry isn't really playing the same game: all other companies producing moka pots are vastly more diversified. You have e.g. Alessi which fills the design-oriented niche (and has tons of other products), or the countless crummy knock-off factories which churn out all sorts of trash and just happen to machine moka pots once in a while. But only Bialetti kept all its eggs in one basket (at least until it was too late).
their quality went way downhill and they shipped production overseas. I would love to have a nice bialetti moka express or a stainless version. i bought one probs 8 or 9 yrs ago, and it wasn't long before the tin lining separated, and the overall quality was crap. nothing like my mother's or any others I had used when i was introduced to them in italy. so i bought an alessi stainless steel made-in-italy pot at like 5x the price, and never looked back. but i do think a well-built bialetti would be as good or better, they just don't make them well anymore…
I see a lot of Bialetti-branded cookware at local supermarkets. Things like nonstick frying pans. I actually bought one on a trip once when I needed a pan, and it's pretty good.
For one thing, things like Aeropress, Nespresso/Keurig, and cheaper/better home espresso and bean-to-cup coffee machines came along. People don't necessarily need to muck around with a Moka pot to make quality coffee at home any more.
And if you really do want a Moka pot, there's a lot of cheaper Chinese competitors now that look the part and actually have reasonable quality.
Has Nintendo ever talked about how they do software development? Can we all drop the thousands of books that have been written about software engineering in general and just figure out what they do?
Game aside, the reviews have been pointing out how well the game performs (after the day 1 patch) and that it is not riddled with bugs, which is an impressive feat for such an open-world game where most things are able to interact with everything else.
It 100% has to do with retention of key talent and knowledge transfer. It seems the model for most western studios is to make one or two big successful games, then layoff all the staff and/or be acquired by EA/Activision/Microsoft. Then their next games flounder as they're milked dry. Western companies are only worried about the next quarter and treat talent as a bottom line expense.
I guess it's not only about what the studios do. The Japanese also have a different kind of loyalty from employees, who rarely ever change companies. It's probably 20 years of average tenure, compared to 2 years in the US. And that's not because these companies pay so much more: they probably pay less for their most experienced staff than what an employee with 2 years of experience gets in the US. It's just a different culture.
There is no big secret to this. They just don't go all out. They don't take big risks. They just see what works elsewhere and polish it until it shines brighter than everything else. And they only add (or take away?) until they have the necessary minimum of gameplay. For example, Zelda BotW was by far not the best survival or crafting game around at release, but it was the most pleasant experience for casual gamers and Zelda fans, because it left out all the unimportant grind which is not relevant for a Zelda game.
Notable in that regard: Apple did the same under Steve Jobs. Focus on the important part, and don't play around.
I would invert that question and ask: what, to you, would qualify as sufficiently new that the game doesn't have? To me, there are tons of things. The death/checkpoint system, weapon durability, the massive non-linear open world, the recipe system, the puzzle dungeons, the fact that, if you want to, you can essentially go challenge the final boss immediately. If these things aren’t new enough, I have to wonder what is.
Most of those things have been fairly common elements of games for decades? For example "puzzle dungeons" is just a staple of Zelda as a franchise, there's a whole genre of games designed around being able to rush the final boss as quickly as possible even though there's plenty of other content to explore (Metroidvanias), and I can't even think of anything particularly remarkable about BotW's death/checkpoint system other than that it's fairly generous with the autosaves.
My point is these are new for a Zelda game, and they've put them together in a complete package in a way that's not been done before. If the bar is "something no game has ever done before," it's rare that anything in life is ever going to reach that bar.
Even if you take a completely different profession, like music, revolutionary artists still have their influences and build on instruments and techniques that are 99.9% the same. It's not like they are suddenly playing flutes made out of loaves of bread. And even if they were, most of the time those sorts of things just come across as gimmicks to me.
I'm surprised to see "Metroidvania" described as a genre where you can rush the boss quickly. Neither of the two defining games that name the genre, (Super) Metroid or Castlevania: Symphony of the Night, allow you to reach the final boss without having explored a substantial amount of the world map and collected the majority of available upgrades. Both games do have low% and any% speedruns that skip a lot of stuff, but those require the use of glitches. Do you have an example of a game you're thinking of?
Neither low% nor any% speedruns of Super Metroid or Castlevania: SotN require the use of glitches. From personal experience, Super Metroid is beatable with less than 20% completion without using any glitches at all. Sequence breaking is not a glitch (which is an actual bug).
But also Hollow Knight, Salt and Sanctuary and Axiom Verge are some pre-BotW games that I've personally played where you can rush the credits without experiencing a significant portion of the game once you've gotten out of the early game.
shout out to Chrono Trigger and Super Mario World which, while not metroidvanias, have the same "rush the final objective once you can with minimum exploration" vibe that many of them have.
I see what you mean, but in order to win Super Metroid you still have to beat all the bosses to open Tourian, and the same with the five bosses you need to beat to get to Dracula in SOTN. In BOTW, once you're off the Plateau, you can literally walk directly to Ganon. And, like you mention, being able to skip a substantial chunk of the game to get to the final boss is as present (if not more so) in other genres; it's been in Mario since the NES!
I would put a lot of it down to Japanese craftsmanship coupled with relatively experienced engineers (average age in their Kyoto office is 40 IIRC). Their selection process is notoriously rigorous too and goes far beyond the usual LeetCode questions you'd get at a FAANG company.
I disagree. Nintendo has good engineers, but so do many other studios. For me, what sets Nintendo apart is not their code or technology, but their game design and game direction: the way they seem to craft their gameplay and game mechanics to have everything needed but nothing more, and then couple it with the perfect match of game aesthetics with unmatched consistency.
This is a Japanese company so most of their engineers will be hired directly from university and typically stay on until retirement. Based on what I've heard they build large groups of engineers who'll stay together more or less permanently. Then they'll rotate these groups between different projects. Sometimes they'll be on a game, other times they might be doing something with hardware. So the groups end up multidisciplinary.
> I would put a lot of it down to Japanese craftsmanship
This is such an orientalist and borderline racist view, it’s crazy. If it were true, then it would also imply the other Japanese game devs are affected by it. There are countless bad games from Japan; you don’t even have to walk far from Nintendo, just look at the Pokémon games and how Game Freak releases them with zero optimization. And then the countless misses from Square Enix, Bandai Namco, etc.
No one minds attributing the wild spunk and ambition of USA startups to “American exceptionalism.” I think it’s pretty reasonable to say that a certain trait is broadly associated with a culture without implying EVERYONE in that culture has to exemplify it.
Grow up man, this is some elementary stuff that you shouldn’t need explained to you.
I think you're missing the point. My family in Japan, and their social/business networks, are exactly like this. Detail oriented and loyal to a fault. While they are not software engineers the expectation is to do your best. Some of it is done through company policies or implied in a social context.
That is not true for all Japanese people but it is true of a large majority. I have first hand experience.
The Japanese are also extremely attached to their past and culture, which shows up in how the game is almost a modern mythical representation of both old Japanese myths and an obsession with technology.
There, is that racist too?
The result is beautiful, I love the whole "ancient technology" concept in it, which is not something we have in our world (except for pyramids perhaps).
the director eiji aonuma has to be the oldest-looking salaryman i've ever seen. i think that says a lot.
long hours, a very high level of expectations, finely tuned attention to good gameplay design, and protection by higher-up execs like miyamoto from the bean counters.
I don't think it is racist to highlight an aspect of a culture and how it might at a group dynamic level causally influence the outcome of something.
The claim that Japanese culture causally leads to better craftmanship might be wrong. As you've mentioned, there are plenty of counterexamples that argue against Japanese culture having the claimed causal influence. But this doesn't make the original claim racist nor even borderline racist.
It might be worth listening to the Acquired podcast episode on Nintendo. They are very far from perfect, and have had a good number of serious failures, along with some very strange decisions that clearly hurt them. Nintendo has a die hard stance on modding that is definitely net negative. Just a few weeks ago they went after some giant Twitch streamers for playing modded content. They also consistently ship technology that is generations behind.
One big thing they pointed out is the type of gaming they target. While the PlayStation and Xbox generally aim for very serious, high-"skill" players, Nintendo often launches just above the seriousness and skill level of mobile gamers. It's easy for me to sit down with my extended family and play Mario Party or Mario Kart, but they'd hate me if I had them play Elden Ring. They are also strongly against much of the free-to-play content.
I left that episode questioning how much of Nintendo's recent success is due to them outcompeting versus the competition making a series of unforced errors.
I think they have a couple advantages:
1) Zelda on Switch has very basic graphics compared to modern AAA titles on other consoles and only a 30 fps framerate.
2) The last Zelda game came out in 2017 so they've had tons of time.
I know people say this, and of course the art style is stylised. But playing it last night, watching the day-night cycle, the grass moving around on the first sky island, and the physics simulation as I dropped my clumsily-made creations, I thought it was pretty impressive. I do think it's a wonder this runs on a low-power tablet computer from 2017, while if I open up Teams on a brand new machine it can be a laggy mess. I do think we have lost track of how much computing power we have and how poorly it is used.
> I do think we have lost track of how much computing power we have and how poorly it is used.
I think this is the real lesson to take away from the Switch. The device is woefully underpowered, but developers know there's a huge market out there so they just have to Make It Work. The end result is that many games, especially first party ones, are super well optimized for their hardware. You could see this with the 3DS too, the things those 285MHz could pull off were definitely very impressive.
On PC and other amd64 platforms there's so much raw CPU and GPU compute available that it's possible to get away with performance impacts. Doom Eternal is one of the few well-optimized big games that just seems to play well on any device with a GPU you throw at it. Compare that to some recent releases and you really wonder how bad things must've gotten.
Of course, highly optimized game development takes time, effort, and skill, and that doesn't come cheap. As long as gamers accept the inefficiencies on other platforms, games will continue to be released in a subpar state. Nintendo cares more about the quality and reputation of their brand (in some areas) than it does about making money so it goes the extra mile; I doubt EA or Bethesda care as much as long as they keep making money.
See, pretty visuals / moods don't need high performance hardware, it's what you do with the tools given to you. I'm sure plenty of kids have just sat in Minecraft for a while watching their world go by, and it's using 1x1m blocks and 16x16 textures.
I wish there were better crossplatform native desktop development environments. Teams made / is making the switch to React at the moment, but it's still a web application.
I've recently gotten into the early access program for Beeper, which promises to be a native cross-channel chat app connecting things like Slack, Teams, Whatsapp etc into a native app. I like the native app part, but the downside of one-app-for-all is that it's lowest common denominator in terms of features and visually it looks like none of the other apps. Still uses 130-140 MB of memory at the moment though.
It would be like figuring out what Steve Jobs, Usain Bolt, or Kilian Jornet do. It would be interesting and helpful, but you will not be able to replicate it by following a recipe.
I have tried to figure this out myself and found two facts that stood out:
1. On BOTW, the game designers did not allow polishing during the first two thirds of the game dev process.
2. The executives do a lot of play testing.
To implement both at the same time is quite something if you ask me.
Easy: with proper algorithms and data structures, thinking about a single kind of hardware, instead of developing on an octocore with 32 GB, an SSD, and an RTX GPU and then expecting everyone else to have the same setup.
Basically by doing development like we used to do in the 8 and 16 bit home computer days.
Just binge watched this today after I saw your comment and it is fantastic. Good banter along with a guided sense of direction towards their goal of completing their challenges and reaching their end city.
What I really like about Wordle is that the very reason all of these fun and interesting math analyses are being done on it is that the entire corpus is easily accessible there in the code. It's not something kept away server-side. It's so easy to cheat at it that there's no point in doing so.
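As a rough illustration of the kind of analysis this enables, here is a minimal Python sketch over a stand-in word list (the real corpus is the array shipped in Wordle's JavaScript; the five words below are just hypothetical samples):

```python
# Sketch: the kind of analysis people run on a client-side word list.
# WORDS is a stand-in for the real corpus embedded in Wordle's code.
from collections import Counter

WORDS = ["crane", "slate", "audio", "horse", "pious"]  # hypothetical sample

# Letter frequency by position, a common starting point for solver write-ups.
position_counts = [Counter(word[i] for word in WORDS) for i in range(5)]

# Most common letter in each of the five positions (ties keep first seen).
best = [counts.most_common(1)[0][0] for counts in position_counts]
print(best)  # → ['c', 'r', 'a', 'n', 'e']
```

None of this requires decompiling anything; the same few lines work on the real list once you copy it out of the page source.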
In addition, if it was an ad-filled, high growth commercial venture, this approach would likely never have been chosen. (Not to mention the entire game mechanics of one word per day, instead of trying to show you a stream of ads every minute or so after each word you would have played).
I guess what I wanted to say is: thanks, open web!
I'm afraid to tell you that you like something for the wrong reasons.
The justification you just provided is wrong. People aren't digging through it because they can; they're digging through it because they like the game, and it sits in a perfect state of simple, popular, and fun where people want to see behind the game's curtains.
This would still be reverse engineered if it had to be decompiled first.
Your second statement kind of proves that you don't believe in your first statement, and in fact it is definitely the simplicity, popularity and fun that comes from the game that spawned intrigue.
What you really wanted to say was: "thanks, fun games!" because this has nothing to do with the game being open.
I disagree, the openness of the implementation does help. People have different skills - not everyone doing these statistical analyses would be able to decompile a game, say.
Flappy Bird and Desert Golfing are both really simple, and both were viral successes; but neither was directly analysed so rapidly and so openly, because it’s much more difficult to do that on iOS.
That's the thing. More people are going to analyse it if it's open, but they wanted to analyse it either way; if it were closed, they would have found the several people who did spend the work decompiling it, analysing certain data, or reverse engineering it, and used their findings.
It's kind of made easier, but not started by open code.
Hmm, I don't entirely disagree, but friction is a real thing and can make a massive difference in whether something becomes a viral hit or not.
If something is a huge success it's likely to be hacked, sure. But being open and hackable can contribute to its success, so if it's closed, it might be less likely to be a success in the first place. In which case nobody will bother hacking it.
A long time ago in Internet time, Justin Frankel (creator of Winamp) created a tool for live music collaboration that approached this problem in a very different way. Basically what it does is that it _adds_ delay for everyone in a way that synchronizes the musical measures, such that you play a measure while listening to everyone else's previous measure.
I never tried it because I'm not a musician (just a longtime fan of Justin and Winamp), but I always found the concept very interesting. Apparently it is still alive: https://www.cockos.com/ninjam/
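The arithmetic behind the idea is simple: the delay is quantized to a whole musical interval, so it is fixed and predictable rather than a jittery network round-trip. A small Python sketch (the numbers are illustrative, not NINJAM's actual defaults):

```python
# Sketch of the NINJAM idea: instead of fighting latency, quantize it.
# Everyone hears the *previous* interval, so the effective delay is a
# fixed, musically meaningful amount rather than a variable round-trip.

def interval_delay_seconds(bpm: float, beats_per_interval: int) -> float:
    """Length of one synced interval; each player hears the others this far behind."""
    return beats_per_interval * 60.0 / bpm

# A 16-beat interval at 120 BPM means you always hear the others 8 s back.
print(interval_delay_seconds(120, 16))  # → 8.0
```

As long as everyone's network latency is below the interval length, the jam stays in sync regardless of who is how far away.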
Bingo this is the answer. You can't "speed up" the speed of light, but you can have predetermined latency with a synchronized clock that lets you do lots of great things.
Other ideas (this is not new btw), you don't get to hear the other participants but you're all on the same sync'ed clock with a metronome and trust the general output will be ok. Final mix is synced to viewers (obviously on delay to achieve sync).
Most studio recordings are done iteratively, where you record a rough "scratch" track but then one by one, record over each part so that the final recording is the sum of everyone playing their best. This combines the feeling of playing with a group asynchronously with producing a high-quality recording. Would be cool to see tools to make this easier although the current gen of DAWs is pretty good.
It's worth watching some interviews with artists like Jack White to get their take on the quantization of music like this. It's not always the best way to make a track. And then when you get to a concert, if the artists can't bring it on stage, it's a bit of a letdown.
Latency is the most crucial issue for audio when doing live broadcasts, especially when the peers all need to be synchronized to a rhythm or pulse.
However, it doesn't stop there. Latency is also important for voice, not just music; excess latency is also the cause of the dreaded zoom fatigue [0].
We humans seem to have brains designed for particular cadences of conversation, and products like Zoom really work to disrupt & disengage these preferences we have, leading to poor communications outcomes.
I wonder if over time, we will learn to adapt to the new cadence as we continue to socialize over platforms like zoom. My hunch is that we will, it will just take time for our minds to adapt to the new medium.
We'll likely fix the latency issues inherent to the way we do live broadcasting on the web today before the human race has the time needed to adapt to the new cadence.
Our vocal communication was solidified into our genetic code over millions of years of iteration. I imagine that in a decade or less, the latency issue will be fixed for most of the connected world.
I'm not sure I understand: if the cello is delayed so I can feel like I play my trumpet in sync, then the cello player can't also feel he's in sync with the trumpet; he must be ahead.
So there is some sort of hierarchy of instruments, or a dumb pre-recorded sync track.
I'm not a musician, but I interpreted it like this:
Let's say the delay is 15 seconds: you hear the composite, including your part delayed, and you think "cool, I played the right thing at the right time, I'll keep going with my same assumptions," or you hear "my part was way off, but the rest sounded decent enough, better do something different."
The part where everyone is just playing terribly isn't a concern or doesn't manifest because you've got a bunch of intermediate or better musicians playing.
You're right, but most speakerphones still aren't even properly full duplex, much less utopian zoom software for apex cyber virtual orchestras.
The number of times I trip up on my own words because the last thing I said is blaring out of a friend's or family member's mobile speakerphone and then back into the mic is disappointing on all levels, to say the least.
I think it works only if everyone is playing the same chord/scale. Kinda like someone playing with a loop pedal. You're right that it definitely wouldn't work with a piece of music.
I've spent most of 2020 building something similar. Now I just wish I had a landing page ready so I can plug it here instead of over engineering the app itself, but oh well :)
Anyway, what I'm building is meant more for the repetitive kinds of electronic music, but I solved the problem by just making it work like a shared loop pedal that records up to 16 measures of audio.
Everyone works asynchronously and can add or remove audio on their own time, but loops get synced to other players as they get recorded.
Best part? You can do cool stuff with browsers nowadays (even record uncompressed audio). So it just needs a web browser.
No comment that mentions Justin Frankel and Winamp is complete without mentioning Reaper. Software written with true passion - that's what Winamp was and that's what Reaper is today.
We tackled this kind of problem with voice systems on the server side: record all streams simultaneously, sync and merge them, and stream the recording. Downside: there will be a delay, but it is a fixed delay, and it's determined by the worst connection. (This was around 1995.)
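A minimal Python sketch of that scheme, assuming streams arrive as lists of samples with known per-stream latencies (all names and numbers here are hypothetical, not the 1995 system's actual design):

```python
# Sketch of the server-side approach: record every stream, pad each one
# out to the worst latency so they all line up, then mix additively.
# Streams are lists of samples; latencies are measured in samples.

def merge_streams(streams, latencies_in_samples):
    worst = max(latencies_in_samples)
    padded = []
    for samples, latency in zip(streams, latencies_in_samples):
        # Delay each stream by the difference, so all share the worst-case delay.
        padded.append([0] * (worst - latency) + samples)
    length = max(len(p) for p in padded)
    # Simple additive mix; a real mixer would also scale to avoid clipping.
    return [sum(p[i] if i < len(p) else 0 for p in padded) for i in range(length)]

mixed = merge_streams([[1, 1, 1], [2, 2, 2]], [0, 1])
print(mixed)  # → [2, 3, 3, 1]
```

The output delay is constant (the worst connection's latency), which is exactly the trade-off described: predictable instead of minimal.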
The thing that boggles my mind is that the only affected users are:
(a) - Firefox users &&
(b) - who downloaded their messaging history on a buried menu option in the account page &&
(c) - in the last 7 days prior to disclosure &&
(d) - who did this on a computer where someone else has access
The number of affected people is presumably very small, and the only criterion Twitter can't check here is (d). How on earth does it make sense to alert every Firefox user with a scary wall of text? Don't they have logs to cross-reference (a), (b), and (c) and e-mail those users?
I'd believe that if there's only one API endpoint that would be crucial to log to protect against major leaks, it would be this one to download all your history at once...
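The cross-referencing being suggested is essentially a log filter. A hypothetical Python sketch (the endpoint path, log format, and dates are made up for illustration, not Twitter's actual internals):

```python
# Sketch: filter server logs for Firefox user agents (a) hitting the
# archive-download endpoint (b) within the 7-day window (c), then
# collect the affected accounts. Criterion (d), whether someone else
# has access to the machine, is unknowable from server logs.
from datetime import datetime, timedelta

DISCLOSURE = datetime(2020, 4, 2)          # hypothetical disclosure date
WINDOW_START = DISCLOSURE - timedelta(days=7)

def affected_users(log_entries):
    users = set()
    for entry in log_entries:
        if ("Firefox" in entry["user_agent"]                       # (a)
                and entry["endpoint"] == "/account/download_data"  # (b)
                and WINDOW_START <= entry["time"] <= DISCLOSURE):  # (c)
            users.add(entry["user"])
    return users

logs = [
    {"user": "alice", "user_agent": "Mozilla/5.0 ... Firefox/74.0",
     "endpoint": "/account/download_data", "time": datetime(2020, 3, 30)},
    {"user": "bob", "user_agent": "Mozilla/5.0 ... Chrome/80.0",
     "endpoint": "/account/download_data", "time": datetime(2020, 3, 30)},
    {"user": "carol", "user_agent": "Mozilla/5.0 ... Firefox/74.0",
     "endpoint": "/account/download_data", "time": datetime(2020, 3, 20)},
]
print(affected_users(logs))  # → {'alice'}
```

A query like this over real access logs would yield exactly the set of accounts worth e-mailing individually.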
re (b): Twitter says "took actions like downloading your Twitter data archive or sending or receiving media via Direct Message", so it isn't just downloading the archive, and to me "actions like" suggests there might be more than the ones named here explicitly. And they might not be able to tell who did it for all of these things.
I also didn't see any notice, but I don't know if I missed it, my adblocker ate it, or if they actually did only inform users that did one of the at-risk things and I happened to not do so.
It looks better than you think it will. Nothing like the tack-sharp backgrounds in a smartphone portrait, but not as melty as a longer lens. Ken Rockwell's [0] sample pictures are representative. His site doesn't allow hotlinks, though, so to see a cropped portrait, scroll to the fourth sample.