justcallmejm's comments | Hacker News

Aloe | Vancouver, BC, Canada (ONSITE preference)

Aloe is the critical thinking layer for agents – to know why they believe what they do, how confident they are, and what actions they can take to increase that confidence. It’s the only path to AI we can trust, delegate to, and rely on.

Aloe exists to build tools to help human cognition succeed in an information-dense world it did not evolve to handle. We believe this is the most important problem to solve today.

Open Roles (https://aloe.inc/company)
- AI Researcher
- UX interaction designer
- UX researcher and designer
- Software engineer - backend
- Software engineer - AI
- Software engineer - front end


Aloe | Vancouver, BC

Aloe is an epistemological intelligence lab building machines we can trust.

Hiring across AI research, UX, and software engineering roles.

https://aloe.inc/blog/the-best-talent-in-the-world


You’re ascribing more to the definition of intelligence than the author does. “Memory that modifies behavior” doesn’t state any purpose, whereas you’re suggesting its purpose must be to reduce uncertainty. The latter sounds closer to a definition of knowledge - i.e. experience compressed into something with predictive power.


“panoply of grifters and chancers and financial engineers” — accurate

Yet let’s look at intelligence for a sec… it’s been evolving for quite some time and isn’t stopping at humans. Humans are building intelligence at a rate far faster than biological evolution. It is almost inevitable (barring our wiping ourselves out through any number of catastrophic failures of governance) that we will build intelligence that supersedes our own. Yeah?

I’m a co-founder of Aloe (https://aloe.inc) - a generalist AI that recently became state of the art on the GAIA benchmark. As I was hand-checking the output of our test to ensure Aloe had done the task, not just found answers in some leak online, I had a real come-to-Jesus moment when it hit me that this agent is already a better problem-solver than most of the adults I’ve worked with in my career. And this is the floor of its capability.

It is a humbling moment to be human. The few humans at the helms of companies developing these technologies will inevitably reshape the trajectory of humanity.


Aloe | Vancouver BC Canada | ONSITE

To the Best Talent in the World: An Invitation

Come join a state-of-the-art team tackling the most important problems in AI – in a society that actually wants you to be here.

https://aloe.inc/blog/the-best-talent-in-the-world

I am a co-founder. Looking forward to doing the best work of our lives together!


This is the problem Aloe (a state-of-the-art agent designed and built by a cognitive scientist) is solving. This article, out yesterday, delves into exactly that: https://puck.news/is-aloe-the-first-self-building-ai/?sharer...


This is why a neurosymbolic system is necessary - an approach Aloe (https://aloe.inc) recently demonstrated can exceed the performance of frontier models while remaining model agnostic.


No - humans are the counterexample.

If you want a model that doesn't hallucinate, then train it to predict the truth and give it a way to test its predictions. For humans and animals, the truth is the real world.

An LLM is trained to predict individual training-sample continuations (a billion conflicting mini-truths, not a single grounded one), whether those are excerpts from Wikipedia or bathroom-stall musings recalled on 4chan. Based on all this, the LLM builds a predictive model which it is then not allowed to test at runtime.

So, yeah, we should stop doing that.
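The predict-test-update loop described above can be sketched as a toy (the setup and numbers are purely illustrative, not from any real system): a model makes a prediction, checks it against the world, and updates only from the observed error rather than from conflicting secondhand text.

```python
import random

# Toy illustration of grounded learning: the "model" holds an estimate,
# predicts, tests that prediction against a noisy real-world observation,
# and updates its belief from the measured error.

def observe_world() -> float:
    """Stand-in for 'the real world': ground truth plus measurement noise."""
    TRUE_VALUE = 7.3
    return TRUE_VALUE + random.gauss(0, 0.1)

def train_grounded(steps: int = 1000, lr: float = 0.05) -> float:
    estimate = 0.0  # the model's current belief
    for _ in range(steps):
        prediction = estimate          # predict
        outcome = observe_world()      # test the prediction against reality
        error = outcome - prediction   # measure how wrong it was
        estimate += lr * error         # update belief from grounded feedback
    return estimate

random.seed(0)
print(f"learned estimate: {train_grounded():.2f}")  # a value near 7.3
```

An LLM's pretraining has the first step (predict) but not the second (test against reality), which is the gap the comment is pointing at.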


Consumers don’t need to want it; VCs just need to imagine making money off it.


The missing science to engineer intelligence is composable program synthesis. Aloe (https://aloe.inc) recently released a GAIA score demonstrating how CPS dramatically outperforms other generalist agents (OpenAI's deep research, Manus, and Genspark) on tasks similar to those a knowledge worker would perform.

I'd argue it's because intelligence has been treated as an ML/NN engineering problem that we've had the hyper-focus on improving LLMs rather than the approach articulated in the essay.

Intelligence must be built from a first principles theory of what intelligence actually is.


CPS sounds interesting but your link goes to a teaser trailer and a waiting list. It's kind of hard to expect much from that.


I'd argue that it's because intelligence has been treated as an ML/NN engineering problem that we've had the hyper-focus on improving LLMs rather than the approach you've written about.

Intelligence must be built from a first principles theory of what intelligence actually is.

The missing science to engineer intelligence is composable program synthesis. Aloe (https://aloe.inc) recently released a GAIA score demonstrating how CPS dramatically outperforms other generalist agents (OpenAI's deep research, Manus, and Genspark) on tasks similar to those a knowledge worker would perform.

