
What does Blender do differently that makes it such a successful open source product?

It’s powerful and pleasant to use. Even the release marketing page is beautiful and well-made.

I like open source as much as the next guy, but outside of developer tools there is little that comes close to Blender in terms of utility and UX.

Is it funding? Specific individuals? Are there PMs and designers? Whatever it is, it’s working!


I think the dogfooding of their own software, through all the amazing movies the Blender Institute made, plays a huge role.

Relying on individual donations from users helps a lot with keeping Blender aligned with the interests of its actual users. There isn't one corporate sponsor, or a handful of them, controlling everything.

Plus the GPL license, which protects the freedom of its users.


I think at the very least they've found those rare individuals who can both code well and manage well. There are countless open source project leaders who might be the most knowledgeable person in the world in their area but have no idea how to communicate and collaborate in an open-source setting, especially as the project gets popular and attracts people who are detrimental (not necessarily malicious) to the project's growth.


Yeah this is a really good question. I'd love to know more about that too. There's some good history on the website here [1].

The creator was apparently selling it as freemium software starting in 1998, and then the bubble burst and the company shut down in 2002. But he founded a non-profit called the Blender Foundation, launched a Free Blender campaign [2] (the forum post is still up!) to raise money from its users, and bought out the rights to the software from the investors.

[1] https://www.blender.org/about/history/

[2] https://blenderartists.org/t/free-blender-campaign-launched/...


The biggest thing I cite when I bring up Blender as probably the best open source software is its stellar UI/UX.

No other open source software (yes, I'd die on this hill) has good UX, and that's horrible for adoption by the general public.


Ignoring the open-source impulse to be different, and instead following industry-standard conventions.

GIMP vs Krita is a similar story. GIMP will never be the Photoshop replacement it aimed to be.


I’ve interrogated people about this but can never get a straight answer.

——

“So you can really see things in your head when your eyes are closed?”

Yeah!

“And it’s as though you’re seeing the object in front of you?”

Yeah, you don’t have that?

“So it’s like you’re really seeing it? It’s the sensation of sight?”

Well… it’s kind of different. I’m not really seeing it.

——

…and around we go.

Personally, I can see images when I dream, but I don’t see anything at all if I’m conscious and closing my eyes. I can recite the qualities of an object, and this generates impressions of the object in my head, but it’s not really seeing. It’s vibe seeing.


For me it is like a different "space" for mental vs real images. It is not the same neurons, I would guess.

The real images are (and feel) outside of myself (obviously, you may say). The mental image feels very close and kind of "inside my mental space", in a dark space. It is far from how I see with my eyes on all levels, very basic. It is more conceptual, that concept given some vague form, not "pixels" (not that the eye is like a camera sensor either, it is much more complicated, a lot of pre-processing taking place right in the retina, which developed from a piece of brain in very early embryonic development). The better I know the object the better this internal concept-image, but far from what looking at the real thing is like.

I am able to visualize, that's why I could write this, but I think my ability to do so is near the bottom. It is vague without details unless I concentrate on them specifically, and it is very dark in there.

On https://en.wikipedia.org/wiki/Aphantasia I am between apple #3 and #4 in that picture. When I read novels I develop barely any internal imagery, only barebones conceptual ones. Sometimes I look at visually stunning movies, YouTube videos, or graphics sites on the web specifically to "download" better images into my brain. Mostly for fantastical landscapes and architecture.

The Lord of the Rings movies, for example, completely replaced all internal mental images I may have had, even though I read the books long before those movies were made. People like me need graphically talented people around, or my mental images will be very much limited to drastically reduced versions of what I see in real life. (THANK YOU to all graphical artists).


It's the same for me, in terms of it being dark and fuzzy unless concentrated on.

But I really do notice this sort of ability when it comes to memory. When I am looking for something, I can often visualize the scene of where I last saw it. This is not always helpful for actually finding the object, but it can be! When trying to recall a meeting, I can recall materials I saw (bits of text on slides, images, etc).

I'm fairly good at remembering faces, and if they're next to a name when I see them, I can even associate the name! The flip side, of course, is that if I don't see the name, I won't remember it.


I find it implausible that people really have extreme, detailed imagery. Not that they can't do it on demand, if desired. But if every time they imagined something, it instantly appeared with all possible detail - that's just tremendously inefficient.

I think of it as more like Level of Detail in a 3D visualization. So when you ask people how much detail they imagine, their response strategy might determine most of the variance. (Some think you mean "what is the ultimate limit of your viz", and others think you mean "what detail is in a no-purpose-given, speeded-response viz".)


It can be highly variable. For example in the morning or right after a nap I can visualize in extreme detail, but when I'm awake and at my most alert it will become a lot more basic.


What about people who can look at something and then draw it? https://en.wikipedia.org/wiki/Stephen_Wiltshire Do they have to recall specific areas, or do they perceive the entire thing as a fully instantiated mental image?


Glad that you used this exact example! This guy doesn’t have a photorealistic memory. At least it’s far from as good as it’s claimed to be. He’s an artist proficient in a particular style - better than most, but not superhuman. When he’s not drawing from a direct reference, he’s simply making up details based on assumptions, not on photorealistic memory. Here’s a good example: https://www.youtube.com/watch?v=FyPqQIHkasI

He looks at a city and then draws a picture of it. It’s very detailed, so we assume he remembered all of it and recreated it accurately. But if you compare any part of it to the actual photo of the city he saw, you’ll see that he only recreated it roughly — some landmarks, the general shape of the coastline. He probably got the number of bridges right.

But you couldn’t use this as a map. If you were trying to find a particular building that isn’t among the top 15 most memorable ones, it’s probably not in his drawing, with a completely random building taking its place instead. Every part of that drawing is filled with mistakes and assumptions that would never be made by someone who could actually see the landscape in their mind like a photo.

And it’s the same with every other claim of photorealistic memory - it’s always some kind of trick where people have a decent but realistic level of memory. And then they fill the gaps with tons of generated detail that we either can’t check, or wouldn't bother to check.


This is called building your 'catalogue' in art, especially concept art. In order to draw something (well) from imagination, you should draw it from reference many times. Then when you draw from imagination, your brain will pull from what it knows. And since you studied the subjects, the textures, the shapes, etc, so well, you will have that stored away and will be able to do so.


Yeah, it resembles what you'd get when using GPT-4o for image editing. Of the parts that should have been unaltered, the broad lines are correct, but the exact details are made up. A modern white chair is replaced by some other white chair. A book is replaced by some other book. Etc, etc.

Both brains and GPT appear to be doing lossy compression based on preexisting world knowledge.


But what we see in the first place is not what's out there; a lot of it is generated by the brain. (Same for what we hear.)


Not exactly. I can imagine (hehe) that robust imagination is useful for practical thinking. It allows you to reason about the world without having to interact with it, by simply simulating complex scenarios in your head.

It's like, if you want to make a weather forecast, you'll use the most detailed models possible, right?


Very, very well put. I couldn't have described my own state as well as you have. It makes perfect sense to me. Thank you.


I'd describe it as like having a second monitor on your desktop. It's not inherently "over" what I already see or anywhere physical; it's in a different space. Sometimes it can feel like it's "behind" what I am seeing (i.e. kind of over it), but it can vary, and I suspect that's just a learned position (I just tried, and I can shift where the images 'feel' like they are).

I don't see with full fidelity; I suspect that's to save power, or a limitation of my neural circuitry. But I can definitely see red and see shapes. Yes, it's not exactly like seeing with your eyes, and if you pay attention you can sense there's trickery involved (particularly with motion being very low fidelity, kind of low FPS), but it's still definitely an image. It's not that it's a blurred image exactly; it's more that it only generates the details I am particularly focused on. It can't generate a huge quantity of detail for an entire scene in 4K; it's more like it generates a scene in 320p, and some minor patches can appear at high res, and often the borders are fuzzy. I can imagine this with my eyes open or closed, but it's easier with eyes closed.

It feels like (and probably is?) the same system used for my dreams, but in my dreams it's more like it's "set up" to simulate my own vision, and the fidelity is increased somewhat.


I have three different ways that vision seems to work with me.

1. Actually seeing something like in a dream.

2. A mental scratch pad I can draw on and use spatial awareness to navigate. (I see the code of applications as flying over a landscape or walking through a forest.)

3. Imagination, which uses whatever data vision gets turned into.

I'm not sure how common 2 is. A lot of my brain has broken parts, and this scratchpad is used in place of logic. This works fine until I need to work on a linear list of similar tokens and keep them in order, as in math and some functional programming languages.


Here is some context: Early in the aphantasia discourse, someone asked a group I was in to do a mental exercise: Imagine an apple. Can you tell what color it is? What variety? Can you tell the lighting? Is it against a background? Does it have a texture? Imagine cutting into it. And so on.

For me, not only was the color, variety, lighting, and texture crystal clear, but I noticed that when I mentally "cut into" the apple, I could see where the pigment from the broken skin cells had been smeared by the action of the knife into the fleshy white interior of the apple. This happened "by itself", I didn't have to try to make it happen. It was at a level of crisp detail that would be difficult to see with the naked eye without holding it very close.

That was the first time I had paid attention to the exact level of detail that appears in my mental imagery, and it hadn't occurred to me before that it might be unusual. Based on what other people describe of their experience, it seems pretty clear to me that there is real variation in mental imagery, and people are not just "describing the same thing differently".


I can _remember_ the properties of an apple - approximate size, weight (my hand does not instantly drop to the floor due to its weight), etc.

I can't _imagine_ an apple in my hand if you defined the colour, size, or weight (for example, purple, 50 cm in diameter, and 100 kg).

In my mind I am recalling a _memory_ of holding an apple in my hand - not imagining the one according to your specifications.

One example I can give is being tasked with rearranging desks in an office. I can't for the life of me _imagine_ what the desks would look like ahead of physically moving them into place.

I can make an educated guess based on their length/width but certainly not "picture" how they would look arranged without physically moving them.

It's like my brain BSODs when computing the image!

The same applies to people - I can only recall a memory of someone - not imagine them sitting on a bench in front of me. I might remember a memory of the person on _a_ bench but certainly not the one in front of me.


Can I ask you a personal question? How do you imagine sex? I thought that everyone kinda thought about themselves doing it with someone else, a bit like a porn movie that you make in your own mind.

I can't imagine it being at all interesting to just think about it the way you are talking about it, like it would just be a sort of description of what the other person looks like, without the multifaceted sensations. Touch, smell, visuals.

And if you can't imagine it, how do you go about ever doing anything about getting it? It's like saying you want a juicy burger without imagining yourself eating it. Like a paper description of an experience, rather than a simulation of it. It doesn't seem motivating enough that you'd bother washing yourself, getting nice clothes, and going to chat with women.


I have so many questions to ask people with aphantasia related to sex, but it would get uncomfortably personal, so maybe best not to.

The best I can do: do people with aphantasia only get aroused if the stimulus is present? Can they not get aroused just by imagining things, as I imagine most people can?

Does steamy literature do anything for them? I imagine it doesn't, since if you cannot imagine things then words on a page just have no power.

In my opinion, the fact erotic literature exists is proof aphantasia is not normal. Words cannot be arousing if you cannot imagine things "in your mind's eye".


Good erotic literature does not only describe images, but also desires, emotions and sensations, all of which I think have different channels of imagination/recall.


I didn't mean it describes images, I meant it elicits them. If you cannot imagine what's happening, you cannot get aroused. Words are just words, they must conjure an image.

Aphantasiacs often cannot imagine sensations either (at least, my friend doesn't. He cannot imagine the smell of coffee either).


> In my opinion, the fact erotic literature exists is proof aphantasia is not normal. Words cannot be arousing if you cannot imagine things "in your mind's eye".

The opposite seems to follow? erotic literature is proof you don't need images to be aroused.


Hmm, no? The words must elicit images and sensations, otherwise they wouldn't work as erotica. Words are just words. If you cannot picture what they are describing, you cannot get aroused.


> If you cannot picture what they are describing, you cannot get aroused.

This is your thesis. In the first place, the existence of erotic literature doesn't prove this is true, like you claimed. I would furthermore claim that it calls this assumption into question. If the goal was imagery, the more straightforward approach would be to draw an image. If that wasn't possible, you would instead describe the image you wanted to draw in words in great detail. But this isn't at all what most erotica consists of.


> In the first place, the existence of erotic literature doesn't prove this is true, like you claimed

Everything we are discussing in this comments section must be understood in an informal way. I obviously did not "prove" anything; I don't think anything can be proven about this anyway. Whenever I say "proof", read my statements as "[in my opinion] this is strong evidence that [thing]".

It's a figure of speech: "this cannot be so!", "it must be like this other thing", etc. It's informal conversation.

> If the goal was imagery, the more straightforward approach would be to draw an image.

Maybe straightforward, but as with anything related to the phenomenon of closure (as in Scott McCloud's closure), drawing an image closes doors. If you describe but don't draw an image, the reader is free to conjure their own image. Maybe they visualize a more attractive person than the artist would have drawn, or simply the kind of person they would be more attracted to.

Have you never seen a movie adaptation after reading the book and thought "wait, this wasn't how I imagined this character"?

> If that wasn't possible, you would instead describe the image you wanted to draw in words in great detail. But this isn't at all what most erotica consists of.

That's such a mechanistic description! Words don't work like this. Sometimes describing less is better, because the human brain fills in the gaps. You don't simply list physical attributes in an analytical way, you instead conjure sensory stimulus for the reader.

(If talking about sex and adjacent activities makes anybody nervous, simply replace this with literature about food. In order to make somebody's mouth water you cannot simply list ingredients; you must evoke imagery and taste. Then again, some people -- aphantasiacs -- simply cannot "taste" the food in textual descriptions!).


> Whenever I say "proof", read my statements as "[in my opinion] this is strong evidence that [thing]".

Read my statement as "it isn't any evidence at all".


Well, that's easy: your statement is wrong.


For me visualization by itself is mostly useless, it is more of a concept of something arousing happening and vague visual flashes of something similar I have seen. It somewhat works, but nowhere near as effective as real pictures.

What works for me - is imagining sensations, they could enhance both real and vague pictures, and I feel them directly in the body which makes them very effective.


> I can't _imagine_ an apple in my hand if you defined the colour, size or weight (for example, purple, 50cm diameter and 100Kg).

I think most people couldn't imagine holding an apple specced like a washing machine in one hand. :-)


That'd be a tiny washing machine, to be fair. That said, a 50cm diameter apple would weigh maybe half that, unless it's made entirely of water ice.


But are those details fabricated on demand?

I don't have any trouble following your path of increasing detail, but if someone says "imagine an apple", I get a vaguely apple-shaped, generally reddish object (I like Cosmic Crisp), which only becomes detailed if I "navigate my mental eye" closer.


I think that is pretty normal while dreaming, daydreaming, or awake if you don't have aphantasia. Someone skilled in neuro-linguistic programming can guide someone into developing greater and greater detail.

Psychedelics and certain meditative practices can enhance this effect. There are also specific practices that allow an imagined object to take on a life of its own.

That's in the private imaginative mindspace. There are other mindspaces. There was one particular dream where I can tell, it was procedurally generated on-demand. When I deliberately took an unusual turn, the entire realm stuttered as whole new areas got procedurally generated. There were other spaces where it was not like that.


When you imagine slicing, is the video your head renders smooth, or a jittery string of pictures strung together?


For me the default is typically an instant view of whatever is described: first an apple, then when I read "sliced" it's suddenly in slices. But if I want to imagine motion I can easily do that too, like a knife cutting down through an apple and the two halves falling to either side, just like a video but with a generic background and other simplifications, like the knife suddenly disappearing when the cut is complete.


It's like hearing a song in your head: you can listen to it and maybe keep time roughly, but if someone asks you what instruments there are, you might not be able to get all of them, or might not remember the drums or the bassline. It's all much more vague. If you asked me to remember my childhood home, I can visualise 'all of it' in my head, but maybe not what the bricks were like, or where all of the windows were.


This actually highlights to me what may be different about mental images for other people. Because I can much more clearly hear music in my head than I can see images in my head. So if it's much more vague for others, that must be kind of what images are like for me.


For me images are clear and easy, sound is limited and more difficult.


Not quite. I have had a lot of musical training and have a very good musical memory. I can write down songs from my head or hear a song and write it down later, depending on how complicated it is, usually with only 1-2 listens, or play it back, etc. I can visualize things in my head but it is a lot more abstract, or rather, harder to explain.


I think the person you're replying to didn't describe it exactly. It's not really about how good your memory is, I think. It's that no matter what, "replaying" the song in your head isn't going to bring about the same reaction as actually physically hearing music. It's like a simulation, a higher-order perception, thinking of yourself hearing it rather than willing yourself to really hear it in the same way as usual.


It might be easier to describe as an eye that is only opened manually, and can only focus on highly specific things. This is my superpower - I can see things vividly in my mind, spin them around, zoom in/out, and more.

When I'm looking at it, the only thing I can see is whatever object is being imagined. However, yes - it's similar to the sensation of seeing with your own actual eyes. The reason it seems so foreign is because our real eyes can see more than one thing at a time. Our mind's eye can only see exactly one subject at a time (though I should mention that when I navigate cities, I do so by imagining a birds-eye view, so there are many objects IN the map, but I cannot see anything other than the map, and it becomes extremely blurry outside of the section I'm focusing on).


For me, it's a little more like you describe these days. It is images, but fuzzier and more impressionistic than it used to be. I have to concentrate harder to have a full-on image of a scene, and can't so much when multitasking.

In college, especially when I was studying Japanese and had to memorize a lot of shapes, I could look at a poster filled with characters and recall it hours later to translate those characters. Your mind is a muscle and it gets better with exercise, and grows weaker when lazy.


I am the same, and I am not convinced people can really - see - things. Like, when I close my eyes, I see the inside of my eyelids, the blackness. When I then try to imagine a candle, for example, there is no candle appearing in the darkness; I just remember how a candle is shaped, its parts, and similar characteristics. I see nothing.


> I just remember how a candle is shaped its parts and similar characteristics

If you do not somehow "see" the shape of the candle, how do you remember its physical characteristics? Is it like a list of physical properties in abstract form? An irregular cylinder of diameter X, longer than its diameter, etc.?

I can see, in front of me, a lit candle if I wish it. I cannot claim it's picture-perfect, but I can see it; and most people can, too. I can see its yellow flame flickering. I can see drops of wax along the candle. I can see the yellow light it casts.


Not the parent, but I relate to their experience.

It depends on what you mean by “see”.

It’s nothing like seeing with my eyes, and it’s nothing like dreaming.

When I “see” it is abstract. There are impressions and sensations. I can recall the qualities of something - even the visual qualities - but it doesn’t feel like sight.

Can you remember what something smells like? I can recall a foul smell, but I don’t recoil because it doesn’t actually feel like smelling. Still, I have an impression of the smell. Sight works the same for me.


> it’s nothing like dreaming.

That's interesting. When I close my eyes and imagine "seeing" things, I would actually describe it as pretty much exactly like the sensation I have when I "see" stuff in dreams. To me, this similarity is especially clear when I wake up in the middle of a dream, then close my eyes while awake — I can continue where I left off, and it "looks" exactly the same as in the dream.

But I agree that it doesn't feel like "sight", as in the physical act of seeing with your eyes.


I think I am aphantasic or mostly so. I don't see visualizations but have vague echoes of their derived properties like spatial structures. It is almost like proprioception if I were some amorphous being that could spread out my countless limbs to feel the shape of the scene.

But, I do have vivid, sometimes lucid, dreams. I would say they are exactly like seeing and being in terms of qualia. It feels like my eyes, and I can blink, cover my face, etc. It's like a nearly ideal, first-person VR experience.

They are unlike reality in that I can be aware it is a dream and have a kind of detachment about it. And the details can be unstable or break down as the dream progresses.

Common visual problems are that I cannot read or operate computers. I try, but the symbolic content shifts and blurs and will not remain coherent.

Motor problems include that I lose my balance or my legs stop working or gravity stops working and I start dragging myself along by my arms or swimming through the air, trying to continue the story.

If I've been playing video games recently, I can even have a weird second-order experience like I am fumbling to find the keyboard and mouse controls to pilot myself through the dream! That is a particularly weird feeling when I become aware of it.

I feel like I have recurring dreams in the same fictional places, but they can have unreal aspects that lead me to get lost. Not like MC Escher drawings, but doorways and junctions that seem to be unreliable or spaces that don't make sense like the Tardis.


> Can you remember what something smells like? I can recall a foul smell, but I don’t recoil because it doesn’t actually feel like smelling. Still, I have an impression of the smell. Sight works the same for me.

Can't get a foul smell reaction mentally, but if I visualize eating a bag of salt & vinegar potato chips and recall the taste, I'll get extra saliva production. Not with most other foods, so I think it's more the mouth preparing to dilute the acid than just a straight Pavlovian saliva-before-feeding reaction.


Yes, I think you come close to describing how I imagine things. Seeing is just fundamentally the wrong word, at least in my case. When I, for example, imagine a road I rode on with my bike the other day, and do this with my eyes open, there is nothing popping up in front of my eyes, mixed with what I actually see at the moment; it's more like abstractions popping up in the back of my head. Very simple drawings, maybe, just the contours of how it really looks.


Perhaps it is a mental process you can train and get better at. I understand the 'back of the head' location for imagination. And now, for me, it's at the front, after some specific training. Drawing (and specific techniques within it) has been the cause of the biggest shifts in 'where/how' my imagination is.


What about memory? Do you occasionally have vivid memories of sight, sound or smell?


Can you describe what you mean by "seeing"? To me, imagination isn't like actual sight. The best way I can describe it is that it's a kind of meta-perception, I'm envisioning the thought, the impression of something. I can visualize the exact details and properties of the candle, but it's not like I'm actually seeing it, I'm just thinking of seeing it. The way you describe your imagination is that it's as if the candle is superimposed on your actual vision, like putting on a mixed-reality headset that's drawing in stuff in your real field of view, representing the same kind of sight as "real sight". Is that what that's like for you?


It's like a photograph is an indirection of the thing that was photographed: not the real thing, but a good visual approximation.

It's like watching a movie; the people are not there, but you still see them.

The cinema is in my mind. People here describe it as "thinking of seeing", but to me that's nonsense. It's definitely a visual thing; I bet it's activating some of the same regions in the brain. Seeing is thinking anyway, in the sense that the brain is interpreting signals from the optic nerve.

It's never a hallucination in the sense of being confused about what's real and what's not.

I can also anticipate the taste of something I like, feel it in my mouth, and start salivating. Is it tasting or "thinking of tasting"?


It's more like it's on a different plane: you can see it, but it's from another source, like how I can hear things but it doesn't affect my sight. If I imagine a candle, I "see" a candle in front of a black background, with a flickering flame and a bit of wax dripping down the side. Like how you can have a song in your head but still listen to people.


I remember the shape of a candle perfectly well, I just can't "see" anything.

It's not a list of abstract properties, it's an understanding of the shape of a candle. Why would you need to be able to see it to remember its shape?


Because the shape is a physical thing, it's perceived by your senses.

I meant remember, not understand. You can understand something, but I specifically mean remember.


I can prove I can remember the shape, because I can draw it.

I think you're putting too much importance on the ability to visualize it. I can have a high-resolution image of a candle, but it's not useful for understanding that there's a candle in the picture - for that, you need to have parsed the image and understood what it contains. The visualization is just the source material. Similarly, when you read a book, you're not remembering what entire pages look like with all the words on them.

The problem with these kinds of things is that so much happens unconsciously that we're not aware of. You think remembering the image is important because you're unaware of all the processing that allows you to understand the image.


> I think you're putting too much importance on the ability to visualize it.

Almost all artists will tell you the ability to visualize is critical to be a good artist...


Back when I was on some medication to help me sleep, it came with the side effect of vivid dreams... and if I didn't fall asleep fast enough after taking it, I'd get hallucinations while my eyes were closed. I knew I wasn't seeing what I thought I was seeing, but I wasn't really in control of the imagery. In one case, I thought there was a suit of armor standing over me and mumbling. In another, I was lying in bed but seeing the living room from a few feet outside of my bedroom.

My - and what I presume is "normal" - mental imagery isn't any different than those hallucinations, with the exception that I am willing what I imagine, and therefore control what I "see" in my mind. The colors, contours, lighting, shading, and so on are all like what you would see with your eyes, though the actual level of detail is less.


I'm also the same, but I do believe others can vividly see creations in their mind's eye. Nikola Tesla was one who could tinker in his imagination.

Of course I wish I could do the same. On the other hand, like a blind person with other heightened senses, I have strengths in thought that surpass what seeing concretely may obscure. Most of my thinking and reasoning is more like following graphs of related bits of vaguely visual information; it's far more topologically structural than bound to 3D physicality.


I'm convinced I probably have aphantasia.. maybe even quite extreme. On a scale of 1-10 probably 1 or 2 vividness.

But if I take shrooms.... I can actually see objects with my eyes closed. I can rotate them. Morph them. It's so fun! Huge bummer that I miss out on stuff like this in my daily life.

What's weird is that I can still "rotate objects" and correctly predict their final state when I am sober (up to a point, of course). But I am blind to the actual visual. It's hard to explain. It's just not registering in my consciousness - but perhaps it's there behind the curtain.

So, the mind is undoubtedly capable of performing this feat. However, my brain in sober state is not wired to transfer information in this way.


Exactly the same here. I can operate on the data, without the visuals.


> I am the same and I am not convinced people can really - see - things

My experience of seeing images in my mind is significantly different than when I am not seeing images, and also different from just remembering the details of an object like an apple vs visualizing it.

Regarding closing your eyes: I don't typically close my eyes when I create mental imagery, I'm turning it off and on right now as I type this, now there's an apple I can see in my mind, now there is nothing but the generic slightly darkish background that the apple was sitting in front of. Now the apple is there again but it's green not red, etc.


Some people can see images while they are conscious just like you see them in your dreams. Perhaps even better, depending on their ability to visualize. Maybe you just never developed the conscious ability to visualize.


I can visualize things in a lucid dream, and it's identical to seeing for me. But I can only control it for a short time before I wake up.

When awake, I have a "mind's eye," but it's more like what you're describing. As I fall asleep, I can actually begin to see things. I wonder if some people can do that when awake.


Can you remember seeing? I use my imagination to get a very grainy image but it's usually my interpretation of it and what I'm using it for.

Like in school, I'd imagine graph lines before they were drawn. The best example is a CAD test: from reading the directions, I could get an idea of what I was about to draw in CAD.

Man made computers in our image; "computer" used to be a job title.


> Personally, I can see images when I dream, but I don’t see anything at all if I’m conscious and closing my eyes.

That's classic complete aphantasia. I have it too.

The "kind of different. I’m not really seeing it" would apply just as well to dream images. If you're interrogating people, you might try asking them whether it's similar to that.


When people tell me they can see things in their mind I usually ask something like:

"imagine a ball, can you see it?"

"yes"

"ok what color is it? "

I never heard anyone say anything other than a variation of "hm I don't know". It's just an anecdote but still


What's funny is, I have complete aphantasia, but I can imagine a ball, I just can't see it. If you ask me what color it is, I would say white, because I imagined a baseball. But I can't see it, I'm just thinking about it.


When you read this do you hear it in your head?


I wouldn't say "hear", but I do have an inner monologue. When I read, I have an experience of the words in my mind. But similarly, when I look at the world, I have an experience of what I'm looking at, while I'm looking.

The difference comes when I close my eyes vs. block my ears. When I close my eyes, I don't see images, I can't voluntarily make images appear. But with my eyes and ears blocked, I can still think words - my inner monologue - which I experience in much the same way as I do when I'm reading. I can't conjure other sounds though, which is why I don't really consider that equivalent to "hearing" - it's not sound, it's the concept of words. I don't have any analogue of that for images.

Ordinary aphantasia doesn't imply anything about lack of inner monologue. Some people apparently do lack an inner monologue, and if they're also aphantasic, that's been described by some authors as "deep aphantasia". But there's no evidence that the two conditions are related, except in a kind of conceptual sense.


> "ok what color is it? "

As I was reading your post and imagining, when I got to the color question it was a plastic spotted ball, white background with various colored spots. As I continued reading I switched to a red rubber ball.


“Yes — I can imagine it. A simple sphere, maybe sitting in a soft pool of light.”

“I’m picturing it as a bright red ball, glossy and catching a bit of light on one side.”

Great, huh? Except that’s what ChatGPT said when I asked it those two questions. It certainly isn’t picturing anything. If a robot which only ‘thinks’ in terms of chain-of-thought of abstract tokens can act as if it truly sees things, what makes you think this test has any validity at all?


Not everything is about AI, I don't give a shit about what chatgpt thinks


> Personally, I can see images when I dream.

If I dream, I don't ever remember it - I assume I must dream, since I think everyone (barring medical issues) has REM sleep.

I envy people that, dreams sound amazing.


I went from frequent lucid dreams as a child and teen, to no (remembered) dreams, back to vivid (but very rarely lucid) dreams. All while having aphantasia, I wish I could get even approximately close to dream images while awake.


Have you tried a dream journal? We forget most of our dreams because we might have them at 2 am and wake up at 7 am. If you wake yourself up in the middle of the night one or two times, you're more likely to have been in the middle of a dream, and it's still up there in your brain enough to write down. The more you do this, the easier it becomes.


Personally I strongly do not want to get better at remembering dreams. At the moment I very rarely remember anything about dreaming, and on the very rare occasion that some fragment of memory from a dream pops into my head it is super confusing until I identify "oh, that must have been from a dream". I prefer to keep my memory uncontaminated with random garbage :)


I remember my dreams quite well. Years ago, I did a dream journal to up that even further. At the time, I discussed doing so with a friend, and she expressed a similar sentiment to yours. In our discussion, she explained not wanting to "carry emotional baggage" from a dream into her day, being distracted by it, and so forth.

That phrasing of "carrying emotional baggage" stuck with me, because together we realized that people can relate to their dreams very differently. If she remembers a dream, she remembers the feelings and feels them all over again. I regard dreams as junk data, and can't imagine "feeling" anything about one longer than a few moments after I wake.


As someone with very poor natural dream recall, I think you're right. One time I kept a dream journal and got really good at dream recall.

It was just hours and hours of random junk every night.

I threw away the journal and realized forgetting dreams is good.


In my experience, remembering dreams is a matter of practice and stress levels. When life is calmer I remember a lot more.


Not for me, never remembered them at any point, I asked my mum once if she remembered me dreaming when I was a kid and she couldn't remember it either, no dreams/no nightmares.

I have an active imagination and I read a lot of fiction and I don't think I have aphantasia, I just go to sleep, wake up and never remember a thing in between.


Pretend you're talking about photos and cameras. You mean you can see the image? even though the camera isn't pointed at it now? Like it's really seeing it?

Same idea. You're seeing it, but you know it's just a memory of the thing, not a live view. Like pulling up a video or jpg instead of a live feed.


Let’s suppose you have perfect recall.

Pull up the image on your phone and look at it. Now close your eyes and imagine the image as accurately as you can.

Is it as though you didn’t close your eyes at all? Do you see it the same way as when your eyes are open?


No.

When I'm fully awake, the mental images are more like someone attached a new camera with a field of view that ends at the edges of the object/scene I try to generate.


Okay, forget everything outside that field of view in your real vision.

If you could crop your real field of view somehow to just the photo in question, then would it be as though nothing changed?

(Like, I get that things outside the phone image would change, but does the image you're imagining change? Does the sensation change?)


The details get better or near photorealistic when I'm about to doze off.

When I'm wide awake, parts of the image are "gone" when I'm not focusing on them.

Also, the sensation of seeing in my mind does feel different. It's like there is some different place where that image is showing up.

Even if I imagine the mental image to overlay with my real vision, it feels like it's "added" somewhere between my conscious mind and the outside/real world.


I’ve got a hollow log from an apple tree in front of my parked car. I know the contractor put a bucket upside down on it; I could walk out my front door with my eyes closed and kick it (I know exactly where it is). But is the bucket at an angle to the left or right? I don’t have a picture I can reference. I know that I don’t know, because I’d have to have noticed and remembered.

Does your photograph allow you to faithfully recall details you didn’t notice at the time or is it a simulation of an image?


It might be helpful for intuiting the structure of a program. Imagine if you had to read code all on a single line, with newlines represented with \n.

I can get the feel of a piece of code just by looking at it. Even if you blurred the image, just the shape of the lines of code conveys a lot of information.


True, but LLMs are already really good at that kind of thing. Even back in 2015, before transformers, here's a karpathy blog post showing how you could find specific neurons that tracked things like indent position, approx column location, long quotes, etc.

https://karpathy.github.io/2015/05/21/rnn-effectiveness/

That said, I do think algorithms and system designs are very visual. It's way harder to explain heaps and merge sorts and such from just text and code. Granted, it's 2025 now and modern LLMs seem to have internalized those types of concepts ~perfectly for a while now, so IDK if there's much to gain by changing approaches at that level anymore.


Another example might be the way people used to show off their Wordle scores on Twitter when the game first came out. Just posting the gray, green and yellow squares by themselves, sans text, communicates a surprising amount of information about the player's guesses.
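As an illustration of how much structure those squares encode: this is not the actual Wordle source, just a minimal sketch of how the per-letter feedback could be computed, with duplicate letters only marked yellow while the answer still has unmatched copies to "spend" (the function name and emoji mapping are my own choices).

```python
from collections import Counter

def wordle_feedback(guess: str, answer: str) -> str:
    """Return the emoji-square feedback for a guess against an answer."""
    colors = ["gray"] * len(guess)
    # Count answer letters that are NOT exact matches, so duplicates
    # in the guess only turn yellow while the supply lasts.
    remaining = Counter(a for g, a in zip(guess, answer) if g != a)
    # First pass: exact matches are green.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            colors[i] = "green"
    # Second pass: right letter, wrong position is yellow (if supply remains).
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g != a and remaining[g] > 0:
            colors[i] = "yellow"
            remaining[g] -= 1
    emoji = {"green": "🟩", "yellow": "🟨", "gray": "⬜"}
    return "".join(emoji[c] for c in colors)

# e.g. wordle_feedback("crane", "react") → "🟨🟨🟩⬜🟨"
```

Five squares per row, three states each, over up to six rows: that's why a sans-text grid can pin down so much about the guesses.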


Owning the user base seems like a huge strategic advantage.


I love my USB-C iPhone but Lightning was smaller and easier to plug in.


From my experience using various (work provided) devices in outdoors agriculture use, I consider the lightning connector/port less prone to failure as well. If something was to break (from torque), it seems like the tab on the cable should snap or the cable just pull out before catastrophic damage to the port can occur.

Though I still had to replace cables because the cable itself developed a break somewhere, even with one that had proper stress relief at the ends.

Meanwhile most of the USB C ports on my Lenovo laptop from 2022 are barely working because somewhere along the line either the soldering broke or the port got too loose. Possibly from too much torque but I’m not sure. So the cable has to be at just the right angle. I’ve also done some android phone battery/screen replacement for friends, and had to do a few USB-C ports when it was possible due to the same sort of thing.

However all that is pretty much moot now, thanks to wireless charging and magnetic attachment docks. As such the only time I connect a cable anymore is monthly for cleaning out photos and other data. Previously I’d be connecting cables several times a day to charge in between fields as the battery went to shit. Honestly the “MagSafe” concept is the only change I’ve seen to smart phones in the past decade that I actually really like.


Lightning had small pins inside the port that could be caught by debris and pulled out of alignment (or in worst cases, broken off altogether). USB-C has no moving parts on the device side. Apple was reportedly behind that design since Lightning was nearing release when design for USB-C started (and Apple is/was a member of USBIF)


> Lightning had small pins inside the port that could be caught by debris and pulled out of alignment (or in worst cases, broken off altogether).

Lightning has 1.5mm of height in the slot, debris has to be pretty large to get stuck and usually it's enough to just blow some compressed air into the slot to get dirt to release.

In contrast, USB-C has only 0.7mm between the tab and the respective "other" side, so debris can get trapped much much more easily, and the tab is often very flimsy, in addition to virtually everyone sans Apple not supporting the connector housing properly with the main device housing.


Does anyone have reliability data for USB-C ports? It seems to me like Lightning is more robust to repeated plug/unplug cycles. But this is only on my limited sample size of one laptop with a failed USB-C port and some vague hand waving.


It shouldn't be, my understanding is that the springy bits (the most likely wear part) in Lightning are in the port, whereas in USB-C they're intentionally in the cable so you can replace it. I'm surprised you have a failed USB port, but I've never experienced one fortunately.

I see Lightning as fragile on both sides of the connection, since the port has springy bits that can wear, and the cables also die, either due to the DRM chips Apple involves in the mix for profit reasons, or due to the pins becoming damaged (perhaps this? https://ioshacker.com/iphone/why-the-fourth-pin-on-your-ligh... ).


USB-C has an unsupported tab in the middle of the port. It's pretty easy for that tab to bend or break, especially if the plug is inserted at an angle.

Lightning doesn't have that failure mode. Also Lightning ports only use 8 pins (except on the early iPad Pros), so reversing the cable can often overcome issues with corroded contacts. That workaround isn't possible with USB-C.


I've never seen a device with a broken tab. One thing people seem to misunderstand grossly to keep regurgitating these claims is that there are thousands of USB-C ports from different manufacturers and price points. The Lightning connector is strictly quality controlled by Apple. The USB-C in your juul isn't the same as the one in a high-end device.

The tab in the USB-C port makes the port more durable since it moves the sensitive springy parts to the cable(s) which are easily replaced.

Quality control matters, Apple is arguably quite good at it. USB-C is more wild-west so if you're prone to buying cheap crap you'll be worse off.


Reversing works around some broken conditions for usb-c, power and usb 2.0 data are on both sides. Depending on how bad the corrosion is, reversing may help.

USB 3 might be trickier, but then iPhone Lightning doesn't have that anyway.


Baseline USB 3 is also single sided. Only some of the extra fast modes use both sides.


The springy bits never wear out anyway. I've never once seen an iphone that couldn't grip the cable unless the port was full of pocket lint. Main problem I see is USB-C has both a cable and port which are hard to clean.


The springy bits get torqued weirdly by debris and can be bent out of alignment and/or into contact with each other. It’s rare, but it happens. And the whole port needs to be replaced, which usually means the whole device.


The white plastic toothpick found on most Swiss Army knives is perfect for cleaning USB-C ports.


The Lightning port itself might be more reliable, problem is Apple Lightning cables always break, and all third-party ones (even MFi) are prone to randomly not working after a while. I'd be perfectly fine with Lightning if it were an open spec, instead it singlehandedly created the meme of iPhones always being on 1% battery.


Tape a wire to the trackpad and hold the wire?


My primary email domain is a .me. Never have problems with web forms. It can be more difficult to communicate verbally, so when saying it to non-technical folks I preface it with “my email is a bit weird” and then spell it out slowly. Young people seem to get it more easily.

I also have an email on a .email domain, I have seen it occasionally get rejected by web forms. .party would likely have similar issues.


> It can be more difficult to communicate verbally

Together with a catch all email user part, it's a fun way to find out which companies offer employee discount. "My email is foo@my-domain", "oh, are you a Foo employee?"


More often than not folks look at me gone out rather than asking if I’m an employee.


Every lucid dream I have becomes a nightmare. When I suddenly gain consciousness in a dream I begin to panic and the atmosphere turns sinister.

The last time this happened it turned into some kind of sleep paralysis where I became aware of my physical body but was unable to move as I crossfaded between dream and reality.


Mine have never turned into nightmares, but once I become aware that I'm dreaming and try to take control, the dream seems to fall apart and I wake up.

I've had the sleep paralysis and crossfade that you describe. But it's never psychologically unpleasant.

I've also had lucid dreams where it seems like I get stuck in a time loop and keep dreaming that I'm waking up. It feels like hours have elapsed and I've even gotten bored.


This used to happen to me as well.

This might sound weird but what works for me is once I realize I’m lucid and the dream starts falling apart as you describe it - I quickly start spinning my (dream) body counter clockwise. In most cases this stops the awakening and I can continue lucid dreaming.

Waking up in the ”time loop” is also recurring to me, but a reality check often gets me back on track even when I’m pretty certain that I’m awake (I’m not). I usually just look at my hand. If my fingers look spooky, I’m still sleeping and can induce lucid again.


Personally every time I lucid dream I wake up a few seconds later. As soon as I realize I'm conscious, I directly remember the existence of my physical self, the feeling of my arms, my legs in my bed which directly wakes me up


Interestingly enough I've had the opposite experience. If I'm having a nightmare, usually at some point I realize it's a dream, and from there I can almost always force myself to wake up immediately. It rarely happens for me in a regular dream but when it does I can start to control the scenario to some degree.


I would suggest doing a sleep test.

While I believe this can just happen to some people, in my case it was a result of sleep apnea. Getting diagnosed for it and taking remedial steps has been a life changer for me.


When the Apple Watches start monitoring for it, you’re going to see sleep apnea diagnoses skyrocket.


Now that it's what you expect to happen, it probably makes it more likely to happen. I wonder if you could train yourself to expect a better outcome.

It can be difficult to control a lucid dream, so it may take some work. Most of my dreams have always been lucid, but I didn't know people tried to control them until I was an adult. One of the first times I tried to control one, I tried to teleport to a beach, but instead a matte backdrop of a beach popped up, like I was in a 70's TV show.


I've had both lucid dreams (which was enjoyable) and sleep paralysis before. The paralysis was not a fun experience at all, and sounds a lot like what you describe.

It's apparently common enough that there's folklore around it: https://en.wikipedia.org/wiki/Night_hag

I've only had the sleep paralysis a couple times thankfully, and anecdotally the last time I had woken up in the middle of the night beforehand, remembered something I needed to do on my computer, took care of it in a dark room real quick, then went back to sleep. I suspect the sudden bright light and a bit of stress probably contributed to it happening.


We have our own folklore about it too. I believe a good percentage of alien abduction experiences are in fact attributable to sleep paralysis phenomena. Alien abductions are as real to us as night hags were to our predecessors.


I can think of benign uses for lock picks and guns. What is the benign use of a secret exploit?


One example I can think of: the WoW private server Warmane uses an RCE to extend client functionality.

https://www.reddit.com/r/wowservers/comments/1eebxwf/warning...


You've never needed to get root access on an old computer when nobody knows the password?


It doesn't have to be secret. For example, unlocking old phones. There are certainly people waiting for the right exploits to get access to their old wallets.


The Waymo-Tesla duality is so fun.

Tesla FSD already works everywhere, even on unpaved roads. It just doesn’t work as well as Waymo.

Waymo works very well, just not in as many places as Tesla.

You might bet on Waymo because they have a fully working product already, but I’m betting on Tesla because of the vast amount of training data they are collecting. There’s a bitter lesson here.


> the vast amount of training data they are collecting

They keep pushing this point. And they do appear to be collecting an absolute firehose of data from the millions of vehicles they have on the road. By comparison, Waymo collects a lot less data from many fewer vehicles.

Which leads to some tough questions about Tesla's tech. If they have (conservatively) 10x the training data that Waymo has, why can't their product perform as well as Waymo? Do they need 100x? 1,000x? 10,000x?

Assuming they were at parity with Waymo today, this would suggest that their AI is only at best 10% as effective as Waymo's, and possibly more like 1% or 0.1% or whatever. But since they can't achieve parity, it's not even possible to bound it.

It's entirely possible that their current stack cannot solve the problem of autonomous driving any more than the expert systems of the 60s could do speech translation.

I haven't heard a compelling argument as to why a system that is at best 10% as effective would ever be expected to be the leader.


Data isn’t as useful here as in other domains, since when you change the car’s behavior even a tiny bit, a lot of the timeseries is invalidated. It’s not evergreen, and it can be quite subjective what it means to “pass” a scenario that one previously failed.

Also, Tesla collects data from its fleet, but that data’s fidelity is likely quite limited compared to other companies, because of bandwidth if nothing else. Waymo can easily store every lidar point cloud of every frame of driving.


FSD and Waymo are completely different products. FSD isn't even autonomous, as the user manual reminds you:

    Always remember that Full Self-Driving (Supervised) (also known as Autosteer on City Streets) does not make Model Y autonomous and requires a fully attentive driver who is ready to take immediate action at all times.


> It just doesn’t work as well as Waymo.

I hope for the best for Tesla, but they are many years behind Waymo. The world definitely needs a second working self-driving system! Right now comparing Tesla and Waymo is nonsensical. Once you can sit in the backseat of a Tesla while it drives there might be some worthy comparisons to be made.


My definition of "works" includes the fact that a self-driving car will never drive into a parked fire truck, or many other things i've seen tesla FSD do.


I’m betting on waymo because they use lidar


Well and because they actually have real self driving cars without a safety driver. Tesla doesn't have that and only has demoed it in very specific scenarios.


It's not that black-and-white.

And those demos are VERY old at this point.

I own a Tesla, though I don't own FSD, but this year, Tesla has given all cars a trial of FSD on two occasions. It works remarkably well. I backed out of my driveway, then enabled FSD and it drove all the way across Portland to a friend's place with zero intervention. It was about a 15 mile, 30 minute drive.

It navigated neighborhood roads without markings and tons of cars parked on the curb. It got onto the freeway and navigated, including changing lanes to overtake slow traffic. Once I got to their place, I was able to tell it to automatically parallel park on the curb.

As far as I'm concerned, Tesla has fulfilled their promise of full self driving. The "supervised" requirement is basically just being used as a legal loophole to avoid liability if it fails.


> The "supervised" requirement is basically just being used as a legal loophole to avoid liability if it fails.

"If it fails" - so it is supervised for a reason then. It makes sense because FSD has an intervention rate in the low double digits according to community trackers like https://teslafsdtracker.com.


I think I'd only consider the promise met if they take the liability


Waymo works in 4 cities as a fully autonomous vehicle.

Tesla works nowhere as a fully autonomous vehicle.


I don't understand the "bitter lesson" reference here. The bitter lesson is that general methods of computation are more effective. How is one of the two not using general methods?


My understanding is that Waymo is applying specialized centimeter-scale mapping and lidar to achieve superior results.

In contrast, Tesla is using dumb cameras and just dumping boatloads of data into their model. It’s a more general solution. Maybe the reference doesn’t fit perfectly - the model architecture is likely similar under the hood - but there’s some analogy there.


I think you need to update your understanding.

Just saying they have better results because of mapping and lidar is incredibly reductive. They have an extremely sophisticated AI/ML stack and simulators.

Start here: https://www.youtube.com/watch?v=s_wGhKBjH_U&t=2135s


Data quality is important, too.


One challenge for Tesla and Waymo has been the piecemeal permitting process. Even though California gave Waymo a statewide permit they have still needed to work through various cities/counties for permits. I imagine one goal of Musk's is to make that all go away sometime next year. I'm not making a comment on whether I agree that is a good idea. Just speculating.


Do you think it’s time for a specific federal regulation?


If training data is such an edge for Tesla, how is it that Waymo works so much better than FSD with only 1/1000th the data?

I also don't see any evidence that Waymo can't work anywhere. They recently expanded to Austin, and it seems that it immediately drives better than FSD.


Tesla FSD will never catch up to Waymo until they switch to LIDAR and have human assistance when the vehicle gets into complicated scenarios such as emergency vehicles blocking the road and redirecting traffic.


I'm betting on Tesla not for the technology, but because President Quid Pro Bro is probably going to issue an Executive Order that turns every Tesla company into a federally blessed monopoly.


> Tesla FSD already works everywhere, even on unpaved roads. It just doesn’t work as well as Waymo.

I would not bet on Tesla's FSD other than on highways. Same as many of the Tesla FSD owners I know.


I think I dislike it more on the highway than in the city, and I really dislike it around traffic lights. The highway lane change decisions are awful.

Maybe the end-to-end NN version is better, though. I haven't been able to try it (hw3).


Indeed, it was not very long ago that FSD would happily take a straight line through a roundabout, lanes or "skirts" be damned, essentially treating it as a standard intersection.


Training data is trivial to collect. Betting on Tesla because they have more training data than Waymo is like betting on Roscosmos because they have more employees than SpaceX.


But doesn't Waymo have the better hardware/sensors?


I’m not sure about computing hardware, but Waymo absolutely has better sensors, yes.

But it isn’t obvious to me that better sensors outperform better data.


Isn't Waymo operating an autonomous taxi service at 100k trips per week a concrete example of outperformance?

Shipping working product should count!


> 100k trips per week

Now 150k trips per week (things are moving fast)

https://x.com/Waymo/status/1851365483972538407


It definitely counts! I also didn’t realize the trip count was so high, that is very impressive.

The diversity of geography may be critical, though. You can only drive the Embarcadero so many times before your loss bottoms out.


It looks similar to a cable/fiber rollout where they have to onboard each region individually. I know they are currently doing this in the Atlanta metro.

Like cable/fiber, once they have good models of the business and what it costs to roll out, they have the freedom to accelerate and do regions in parallel. If the business works, I would expect them to scale the pace of rollout.


This is the root of the misunderstanding, I think. You’re begging the question.

More data does not necessarily mean better data. You can collect many more individual driver experiences, but if they do not have sufficient resolution in the necessary dimensions, they may never provide “better data.” Similarly, even if the magic data is hidden somewhere in there, if the model cannot practically extract the insight because of their sizes/disorganization vs the computational/storage capacity, this too would mean they are not better data.

Of course you can make the argument that some of the sensors are unnecessary, but when one fleet has had millions of vehicles for years and isn’t working, and one started with dozens, has recently grown to one thousand vehicles, and is working, the evidence is not in support of the argument.


Waymo has much better data than Tesla. It is just the coverage of that data that is different.


Waymo was able to do this with less miles. How much data does Tesla really need at this point? Assume you have all the data in the world that you could ever possibly want. How much of that can you really compress into a car for real FSD?


Tesla was supposed to have what they needed when they released the Model 3. Then they had to upgrade the cameras and CPU, which meant they had to retrain. Then they rewrote the stack, so again retrain. Now it's new cameras and compute again. The cycle repeats.


How over-fitted are their models to the cameras? I'd expect a layered architecture where a sensor layer does object recognition and classification and then hands over this representation of the world to a higher-level planning model. You shouldn't have to retrain the whole stack for camera revisions; hell, that's how it would work across car models with their different camera angles.


> I’m betting on Tesla

I'll take that bet...

I predict the Chinese, in a decade, will have the first FSD

Tesla is miles behind


>I predict the Chinese, in a decade, will have the first FSD

I'll take that bet any day. China and innovation don't go hand in hand.


Didi is testing self-driving taxis, but before I take that bet I'd say we should decide on the goalposts.


Uhm, we already have FSD in both the USA and China, just not the Tesla FSD. China is running robotaxis in a few limited areas with full sensor setups that rely on LIDAR, and I hear they are pretty good.


so tesla works everywhere except where it doesn't work?


No, the duality is:

Waymo works any time, except where it doesn’t.

Tesla works any where, except when it doesn’t.


There are some "where" issues with Tesla too. I have an intersection where it consistently can't tell that its view is obstructed. It'll just yolo into the intersection, then pause (after pulling out into the lane) when it realizes that it wasn't actually able to see. It's consistent behavior, and seems to be a flaw with obstruction detection.

I might argue that every traffic light is sort of a where too. Mystery meat yellow light handling is scarily bad.


Waymo vs Tesla definitely smells like the bitter lesson to me, yes, 100%. With Waymo being on the bitter side, to be clear. The future will tell if the intuition is right on this one.


FWIW, Waymo has more cameras than a Tesla. Both companies are removing sensors over time. In some ways removing sensors is easier to prove out with real-life data than adding them. I think it is going to be fascinating to see how it plays out.


Tesla added back the radar and improved the cameras in HW4. My guess is that ultimately they'll converge to a similarly capable sensor/compute suite with Tesla improving theirs and Waymo paring down.


Well, one thing is that many of us rarely take taxis. (Aside from reserved private cars to the airport now and then.) I'm unconvinced that self-driving changes the equation enough for most of us. I do have a trip coming up that 50% cheaper Uber might lead me to not rent a car but that's rare.


Outside the US the situation is quite different. But I don't see Waymo testing outside the US, not even trying, so there.


Car ownership is pretty high in a lot of places. It's still pretty much central urban areas (lower ownership, harder to re-find a parking spot) vs. everywhere else. Taxis may be more common in some places, but there are still a lot of privately owned cars, and taking taxis or having drivers is really not the norm in most places.


Nevertheless, the comment I was replying to was about renting a car instead of taking a cab or public transport.

