Zero is a serverless weather globe rendering ECMWF forecast data directly in your browser using WebGPU.
Zero backend. Zero servers. Zero cost.
As climate extremes become more frequent, understanding forecast hazards becomes survival literacy. Zero makes professional ECMWF IFS data accessible without commercial infrastructure — forkable, self-hostable, resilient. Inspired by Cameron Beccario's earth.nullschool.net, which pioneered browser atmospheric visualization.
Happy to discuss implementation details.
Technical highlights:
- No backend - runs entirely client-side
- Native O1280 grid (6.6M points) sampled directly in fragment shaders - no regridding to textures
- HTTP Range requests fetch ~500KB slices from 4-8MB forecast files on S3 (see the sketch after this list)
- Works offline after first load (Service Worker caching)
- Animated LOD transitions for graticule grid - line density adapts to zoom level
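To make the Range-request point concrete, here is a minimal TypeScript sketch; the function name, status check, and float32 layout are assumptions for illustration, not Zero's actual code or file format:

    // Fetch one field slice (~500KB) of a 4-8MB forecast file without downloading the rest.
    // Names, offsets, and the float32 layout are illustrative assumptions.
    async function fetchFieldSlice(
      url: string,
      byteStart: number,
      byteEnd: number, // inclusive, per Range header semantics
    ): Promise<Float32Array> {
      const res = await fetch(url, {
        headers: { Range: `bytes=${byteStart}-${byteEnd}` },
      });
      if (res.status !== 206) {
        throw new Error(`expected 206 Partial Content, got ${res.status}`);
      }
      return new Float32Array(await res.arrayBuffer());
    }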
GPU pipeline:
- Binary search in WGSL for irregular Gaussian grid lookup (precomputed LUTs for latitude positions and ring offsets; see the sketch after this list)
- Marching squares compute shader for isobar contours
- Streamline tracing with Rodrigues rotation for wind flow animation
- Fibonacci sphere for uniform seed point distribution (8K-32K wind lines)
- Globe rendered via fullscreen triangle (ray-sphere intersection in fragment shader)
- Sub-3ms frame times on M1
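The grid lookup is the piece most worth spelling out. Below is a nearest-neighbour, CPU-side TypeScript sketch of the same idea the WGSL shader uses; the buffer names and layout are assumptions, and the real shader may interpolate rather than snap to the closest point:

    // Reduced Gaussian grid lookup: binary-search the precomputed latitude LUT for the
    // right ring, then use the ring-offset LUT to index into the flat value array.
    function sampleReducedGaussian(
      values: Float32Array,    // grid point values, ring by ring, north to south
      ringLat: Float64Array,   // ring latitudes in degrees, descending (north -> south)
      ringOffset: Uint32Array, // ringOffset[i] = index of ring i's first point; length numRings + 1
      latDeg: number,
      lonDeg: number,
    ): number {
      const numRings = ringLat.length;

      // First ring whose latitude is at or below the query latitude.
      let lo = 0, hi = numRings;
      while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (ringLat[mid] > latDeg) lo = mid + 1; else hi = mid;
      }

      // Candidate rings are lo and lo - 1; take whichever is closer, clamped to a valid index.
      let ring = Math.min(lo, numRings - 1);
      if (lo > 0 && Math.abs(ringLat[lo - 1] - latDeg) < Math.abs(ringLat[ring] - latDeg)) {
        ring = lo - 1;
      }

      // Points on a ring are evenly spaced in longitude; the count varies per ring.
      const ringStart = ringOffset[ring];
      const ringLen = ringOffset[ring + 1] - ringStart;
      const lon01 = (((lonDeg % 360) + 360) % 360) / 360;  // normalize to [0, 1)
      const lonIdx = Math.round(lon01 * ringLen) % ringLen; // wrap around the ring

      return values[ringStart + lonIdx];
    }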
What didn't work:
- Regridding to textures first - too slow for 6.6M points, quality loss from interpolation
- Geometry-based globe mesh - vertex count explosion at high detail
- CPU-side contour generation - latency killed interactivity
Storage: Caches weather data locally for offline use. Can grow to several GB with extended exploration. Use the "nuke" option in settings to clear everything.
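For reference, a "clear everything" action like that can sit on top of the standard Cache Storage and Service Worker APIs; this is a generic sketch, not Zero's actual settings code:

    // Generic "nuke" sketch using standard browser APIs; Zero's real implementation may differ.
    async function nukeLocalData(): Promise<void> {
      // Drop every Cache Storage bucket (where Service-Worker-cached responses live).
      const names = await caches.keys();
      await Promise.all(names.map((name) => caches.delete(name)));

      // Unregister service workers so stale logic doesn't linger.
      const regs = await navigator.serviceWorker.getRegistrations();
      await Promise.all(regs.map((reg) => reg.unregister()));
    }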
Data hosted by Open-Meteo via the AWS Open Data Sponsorship Program — bandwidth is free for everyone.
Stack: TypeScript, WebGPU, Mithril, Zod, Immer
Mirror: https://hypatia-earth.github.io/zero
Source: https://github.com/hypatia-earth/zero
I learned the hard way that when Claude finds two conflicting pieces of information in Claude.md, it tends to ignore both. So precise language is key: don't use terms like 'object', which mean different things in different fields.
Looking at properly aligned buildings, I realized school never prepared me to think of "city planner" as a Bronze Age job. How come we call mobile phones progress?
There is a reason we name eras after materials - the bronze age, iron age, etc. Currently we're living in the silicon age.
Progress in fundamental materials science tends to unlock whole new technology paradigms.
You can do city planning with sod and stone. Mobile phones, on the other hand, require a nearly incomprehensible level of materials innovation: everything from the battery to the conductive touchscreen glass to the plastic casing to the silicon microchip, not to mention all the science of satellites, rockets, and radio waves that make them useful.
By the way, the show "Connections" by James Burke is brilliant. A must-watch for any tech curious nerd.
Yeah, so many people like to point at specific inventions and ask why they weren't done sooner, but 99% of the time the answer is that a lack of materials science made production nearly impossible.
It doesn't matter if someone has a PhD in steam engine engineering: if they went back in time to the Roman Empire, there still would be no steam engines, because there were only a handful of examples of accidentally good-enough steel in the entire world. You'd have no way to identify it other than buying 10,000x more than you need and spending years testing every sample to find the good stuff, and you'd need even more of that high-quality steel just to make the tools required to cut good steel into a capable boiler design.
If you can't bang something together with wood, stone, and dirt, it requires advanced materials science and entire industries behind it to produce and be worth the effort. Yeah, a steam powered water pump would be useful to the Romans, but not if it took 5,000 men working for years and dumping endless amounts of money into finding just the right ore source and smelting procedures, only to produce a single engine that merely replaces the labor of 50 guys with buckets.
I think it is a little more nuanced than that. Bret Devereaux wrote about steam engines[1] in the Roman Empire, and his conclusion is that there was no economic stimulus to kick-start the steam engine. For the first half-century (or so) of steam engines, they were atmospheric steam engines and they sucked (in a very literal way: they sprayed cold water into a cylinder to condense water vapor and create a (sort of) vacuum that would suck the piston into the cylinder). I don't believe these steam engines required especially good steel. I think the biggest issue was corrosion due to contact with water, but there was no need to maintain really high pressures.
> Yeah a steam powered water pump would be useful to the Romans
According to Devereaux, it wouldn't have been useful to the Romans. They didn't pump water from coal mines, and when they pumped water they'd need to move fuel from somewhere else to feed the steam engine. That was not an easy or cheap task, because they had no railroads.
I agree with the sentiment in the sense that technology doesn't scale when you're like that Australian guy making everything from one-use mud and sticks. Some of it does but not a lot of it.
Most materials up to the 1900s were readily available in Roman times, though.
The metal of choice for the first Roman steam engines would have been slightly expensive copper, and since they built cobblestone highways connecting their whole empire, they wouldn't have shied away from some forward-thinking investment given a working demo.
The first application for them wouldn't be pumps, either. It would be trivial to set up charcoal factories along the roads so that even a rudimentary steam engine could quickly carry priority goods and the military.
The cobblestone roads could be adapted to tram use with a few thousand guys equipped with standard-width sticks and picks.
A random Roman, maybe not, but a Roman with connections could do it.
You have to remember that this is rediscovering a past that previous cultures only had mythology around. The fact that this paper is basically saying “Stone Age people aren’t less sophisticated” reflects a relatively new idea, dating from when Lévi-Strauss reinvented anthropology in the 1950s and 1960s.
Hindu, then Greek, then Confucian theologian-philosophers laid the foundations for the idea that their group had left behind simply being “animals” and sought to distinguish the human form (in their specific form) from all other forms of life.
Humans also approach things linearly, and it fights our intuition that regression is not just possible but the norm.
Ancient Greeks attributed Mycenaean remains to the “Age of Heroes”. They were amazed by the scale and engineering quality of the work and thought it was done by gods and mythical creatures such as Cyclopes. They didn’t approach progress linearly or mono-dimensionally.
Heinrich Schliemann was probably the first to connect the myths with tangible proof through archeology, in the late 19th century. Lévi-Strauss's work came much later and was more political and polemical than scientific.
I hope we never find out how chimps would discuss the last paragraph:
... Sometimes, at least in humans, social interactions can also increase our irrationality instead. But chimps don’t seem to have this problem. Engelmann’s team is currently running a study focused on whether the choices chimps make are influenced by the choices of their fellow chimps. “The chimps only followed the other chimp’s decision when the other chimp had better evidence,” Engelmann says. “In this sense, chimps seem to be more rational than humans.”
Different PoV: You have a local bug and ask the digital hive mind for a solution, but someone already solved the issue and their solution was incorporated... LLMs are just very efficient at compressing billions of solutions into a few GB.
Try asking something that no one has ever come up with a solution for.
This argument comes up often but can be easily dismissed. Make up a language and explain it to the LLM like you would to a person. Tell it to only use that language now to communicate. Even earlier AI was really good at this. You will probably move the goal posts and say that this is just pattern recognition, but it still fits nicely within your request for something that no one ever came up with.
I haven't tried in a while but at least previously you could completely flummox Gemini by asking it to come up with some plausible English words with no real known meaning; it just kept giving me rare and funny-sounding actual words and then eventually told me the task is impossible.
This reminds me -- very tenuously -- of how the shorthand for very good performance in the Python community is "like C". In the C community, we know that programs have different performance depending on the algorithms chosen.
> In the C community, we know that programs have different performance depending on the algorithms chosen.
Yes. Only the C community knows this. What a silly remark.
Regarding the "Python community" remark, benchmarks against C and Fortran go back decades now. It's not just a Python thing. C people push it a lot, too.
Nah, that part is ok. Wherever you set it, human competence takes decades to really change, and those things show visible changes every year or so.
The problem with all of the article's metrics is that they are absolutely bullshit. It just throws in claims, like that AI can write full programs by itself 50% of the time, and moves on as if that had any resemblance to what happens in the real world.
I spent a few decades in the industry, and in even more teams. I think the quality of code strongly correlates with a team's ability to articulate its members' cognitive load and skills. In some projects it is just not opportune to point out a need to skill up, so everybody just accepts whatever in PRs and quality never gets any better.
On the other end of the spectrum you hear sentences starting with: "It would help me to understand this more easily, if ...".