Subtextofficial's comments

I've resolved the 5σ Hubble tension by accounting for the temporal evolution of spacetime compression. Both the Planck (H₀ = 67.4 km/s/Mpc) and SH0ES (H₀ = 73.0 km/s/Mpc) measurements are correct for their respective compression states. The compression-corrected value is H₀ = 70.7 km/s/Mpc. Empirically validated with 6K lines of Python analyzing the Hubble Legacy Field, at 3.3σ significance. No new physics required, just proper application of GR. Full papers, complete software (MIT licensed), and a video demonstration are included. Seeking peer review and independent verification.
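Where the quoted 5σ comes from, as a quick back-of-the-envelope check (the ±0.5 and ±1.04 km/s/Mpc error bars are the commonly quoted 1σ values, assumed here for illustration rather than taken from the analysis above):

  // Significance of the Planck/SH0ES discrepancy in combined standard errors.
  // Error bars are the commonly quoted 1σ values, assumed for illustration.
  const planck = { h0: 67.4, sigma: 0.5 };  // km/s/Mpc
  const shoes  = { h0: 73.0, sigma: 1.04 }; // km/s/Mpc

  const nSigma = Math.abs(shoes.h0 - planck.h0) /
                 Math.hypot(planck.sigma, shoes.sigma);

  console.log(nSigma.toFixed(1)); // ≈ 4.9, conventionally rounded to "5σ"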


I’ve spent the last 66 days building an end-to-end AGI OS designed to operate across multiple modalities and devices. It’s not a chatbot wrapper — it’s a full personal OS with:

Emotional intelligence (vision + audio + context)

Multi-LLM orchestration layer

EEG/BCI support (Muse 2 / Muse S)

Avatar embodiment (VRM, XR, AR)

Cosmic Vision (astronomy, satellite/ISS tracking)

Mesh networking (off-grid communication)

Navigation + social proximity

Finance, language learning, wellbeing engines

Autonomous feature generation

The Android version is complete and entering debugging. iOS build is ready, but I need a Mac + Apple Developer account to ship it — presale funds will directly cover that.

If you’re interested in the architecture, the feature stack, or want early access, here’s the Founders presale:

https://discussions.gumroad.com/l/bubs


Hi HN,

I’ve been building something I’ve personally needed for a long time: a mobile AI companion that isn’t tied to one model or one provider. Most assistants lock you into a single LLM and a single way of reasoning. I wanted something more flexible, more composable, and more portable.

So I built Bubs, a cross-platform (Android + iOS) AI companion with multi-LLM orchestration and a local-first automation engine.

Core idea

Bubs can talk to five LLM providers (more coming):

Anthropic Claude

OpenAI GPT-4

Google Gemini

Cohere Command

Local/on-device models

It uses a routing engine that selects the best model for a task based on quality, speed, or cost. It also supports ensemble mode, which queries multiple providers and aggregates the results for higher-confidence reasoning. If a provider is down, it falls back automatically.
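As a rough sketch of what priority-based routing with automatic fallback can look like (the names, scores, and complete() callback are illustrative assumptions, not Bubs's actual code):

  // Illustrative router: rank providers by one weighted dimension and
  // fall through to the next-best provider when a call fails.
  type Provider = "claude" | "gpt4" | "gemini" | "command" | "local";

  interface ProviderProfile {
    provider: Provider;
    quality: number; // 0..1, higher is better
    speed: number;   // 0..1, higher is faster
    cost: number;    // 0..1, higher is cheaper
  }

  type Priority = keyof Omit<ProviderProfile, "provider">; // "quality" | "speed" | "cost"

  function rank(profiles: ProviderProfile[], priority: Priority): Provider[] {
    return [...profiles]
      .sort((a, b) => b[priority] - a[priority])
      .map((p) => p.provider);
  }

  // complete() stands in for the per-provider API call.
  async function route(
    prompt: string,
    profiles: ProviderProfile[],
    priority: Priority,
    complete: (p: Provider, prompt: string) => Promise<string>,
  ): Promise<string> {
    for (const provider of rank(profiles, priority)) {
      try {
        return await complete(provider, prompt); // first healthy provider wins
      } catch {
        // provider down or errored: fall through to the next one
      }
    }
    throw new Error("all providers failed");
  }

Ensemble mode is the same idea turned sideways: call several providers with Promise.allSettled and aggregate whatever resolves.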

Chat + automation

Beyond a chat UI, Bubs includes a natural language workflow automation system (similar in spirit to Zapier, but embedded locally):

Create flows with natural language

Edit visually

Run triggers & actions

Connectors for Slack, GitHub, Google, Notion, Email, Webhooks

Execution history

All on-device, no backend required

The automations are intended to let users build personal “micro-agents” without needing cloud infrastructure.
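To make that concrete, here is one hypothetical shape a stored flow could take (illustrative only, not the app's actual schema):

  // Hypothetical on-device representation of an automation flow.
  type Connector = "slack" | "github" | "google" | "notion" | "email" | "webhook";

  interface FlowAction {
    connector: Connector;
    operation: string;               // e.g. "postMessage"
    params: Record<string, unknown>;
  }

  interface Flow {
    id: string;
    name: string;                                     // e.g. "Ping me on new GitHub issues"
    trigger: { connector: Connector; event: string }; // e.g. github + "issue.opened"
    actions: FlowAction[];                            // run in order when the trigger fires
    history: { ranAt: string; ok: boolean }[];        // execution history
  }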

Local-first architecture

Bubs stores API keys securely (Keychain / EncryptedStore) and doesn’t require a backend to function. Your prompts and automations stay on your device unless you explicitly send them to a provider. This also eliminates ongoing hosting costs.

Tech stack

Android: Jetpack Compose, MVVM, Room, DataStore, Hilt

iOS: SwiftUI, MVVM, Keychain, CloudKit-ready architecture

State of the project

I’m finishing debugging the Android build now and polishing the iOS build next. Once both are stable, I’m planning a small early-access presale for people who want to test it while I continue refining the automation engine and routing logic.

Why I’m posting

I’d love feedback from the HN community on:

Multi-model orchestration strategies

Ideas for additional connectors or triggers

Local privacy concerns or improvements

Whether others are working on similar architectures

UI/UX considerations for flow creation on mobile

Suggestions for open-source components Bubs could integrate with

This is a solo build, and I’m trying to make the architecture as clean and extensible as possible. Happy to answer technical questions.

Thanks for taking a look. Bubs is always with you.

– J


I built a standalone, offline-first command center for Meshtastic mesh networks that runs entirely inside a single HTML file. There’s no backend, no installation, and no internet connection required. It works on laptops, tablets, phones, and some smartwatches using only native browser APIs.

Key Features

One self-contained HTML file (51KB)

Works fully offline (PWA)

Connects via Bluetooth, WiFi, or USB Serial (see the sketch after this list)

Real-time map of all mesh nodes

Metrics: RSSI, SNR, hop count, routing details

Message console + logs

No frameworks, no build tools, no cloud services
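As a taste of the connectivity layer mentioned above, here is a minimal Web Serial sketch of the USB path (Chromium-only; the 115200 baud rate is my assumption, not necessarily what this tool uses):

  // Minimal Web Serial sketch: prompt for a USB serial device and stream
  // raw bytes from it. navigator.serial requires a Chromium browser
  // (and the w3c-web-serial types when compiling as TypeScript).
  async function connectSerial(): Promise<void> {
    const port = await navigator.serial.requestPort(); // user picks the device
    await port.open({ baudRate: 115200 }); // baud rate assumed; check firmware

    const reader = port.readable!.getReader();
    try {
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        console.log("rx", value); // value is a Uint8Array of raw bytes
      }
    } finally {
      reader.releaseLock();
      await port.close();
    }
  }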

Why I built it

Existing tools rely on mobile apps or desktop programs that depend on OS permissions, cloud APIs, or network access. For emergency communications, off-grid operations, research teams, and field deployments, I wanted a universal interface that would work anywhere, on any device, under any conditions.

Looking for feedback on:

Hardware compatibility (especially T-Watch S3, RAK, Heltec)

Browser behavior across different platforms

Missing features you’d like to see

Ideas for v2 and beyond

This is still early, and feedback is very welcome. Thanks for taking a look.

— Jordan Townsend


Thanks! Under what conditions has this been tested so far?

It just feels like the right context to share before people start investing some of their scarce attention in your efforts (everyone's attention is scarce and precious).

Again, I really appreciate the work and the share; it just feels like it could use some clearer context.


> feedback is very welcome

The readme is obviously AI-written, and it's clearly incomplete. That leaves me wondering how accurate the rest of the readme is, and how much of the code is vibe-coded slop. I know some people use AI to write docs for reasonable reasons; perhaps English isn't your primary language. But the readme smells of AI and of a lack of attention to detail, which feels worrying enough that I won't be using this, at least not unless it gains traction among people I know and trust.

Key examples:

"[Insert your download link here]"

"License

Choose the license appropriate for your repository:

Apache 2.0

MIT License

MPL 2.0

Ask if you’d like these generated for you."


So sick! Congrats!


Essentially the title. I need testing at a scale I can't manage on my own right now. It's a work in progress, but I've been using it within my limited capacity. It lets you connect any API account or AI service into an AGI precursor that builds itself toward each level of AGI as defined by DeepMind. It also has a full feature list, and I'm happy to open-source this for the betterment of humanity. It acts as n types of users would. If anyone has recommendations on setting up proper tracking, development, and release of updates with community support, please reach out. Current cb rank: 8 (for my metrics).


Video of it working with YouTube: https://youtube.com/shorts/KC36Q7hFkIg?si=VhNyKGfQS0i8dfoA

I have trouble reading facial cues, which means I miss important emotional context as well. I built this Chrome extension, which works on YouTube, Twitch, Google Meet, and any other sites you allow on your site list. The repo carries an MIT license, so it'll be free forever.
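For the curious, one plausible way a content script can grab video frames for this kind of analysis (a hypothetical sketch under my own assumptions, not the extension's actual code):

  // Hypothetical content-script sketch: copy frames from the page's <video>
  // into a canvas so an on-device emotion model could analyze them.
  // Note: DRM-protected (EME) streams can't be captured this way.
  function sampleFrame(video: HTMLVideoElement): ImageData | null {
    if (video.videoWidth === 0) return null; // metadata not loaded yet
    const canvas = document.createElement("canvas");
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const ctx = canvas.getContext("2d");
    if (!ctx) return null;
    ctx.drawImage(video, 0, 0); // snapshot the current frame
    return ctx.getImageData(0, 0, canvas.width, canvas.height);
  }

  // Poll a couple of times per second; a real extension would gate this on
  // tab visibility and hand the frame to its classifier.
  setInterval(() => {
    const video = document.querySelector("video");
    if (video && !video.paused) {
      const frame = sampleFrame(video);
      if (frame) {
        // classify(frame) would go here
      }
    }
  }, 500);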

