Well, I had to try, just for the fun of it. The immediate reply was the following:
[...]
Starting with 0 paired with 1. I could think about functions or some formula that could map these values, but that might be too complicated.
Alternatively, perhaps these numbers represent something specific. For example, maybe they're representing the number of certain features in the number itself, like the number of closed loops in the digit's shape.
Let's think about that:
0 has one closed loop.
5 has no closed loops.
6 has one closed loop.
7 has no closed loops.
8 has two closed loops.
9 has one closed loop.
Wait a minute, that seems to match the sequence:
0 (1 loop) - 1
5 (0 loops) - 0
6 (1 loop) - 1
7 (0 loops) - 0
8 (2 loops) - 2
So, if 9 has one closed loop, then it should be paired with 1.
But let's double-check this theory with the given pairs.
0 has one loop, paired with 1 – matches.
5 has no loops, paired with 0 – matches.
6 has one loop, paired with 1 – matches.
7 has no loops, paired with 0 – matches.
8 has two loops, paired with 2 – matches.
Yes, this seems consistent.
Therefore, 9 should have one closed loop, so it should be paired with 1.
[...]
Then it continued trying another approach related to the number of angles in the digit shape, but discarded it. Then it tried "Alternatively, perhaps it's related to the number of segments that are 'on' in a seven-segment display." and so on.
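For what it's worth, the rule the model converged on is trivial to state in code. A minimal sketch in Python (assuming the usual open-top glyph shapes, where only 0, 6, 8, and 9 enclose loops):

    # Closed loops per decimal digit, assuming common printed glyph shapes
    # (a closed-top "4" would have one loop; the open-top form is assumed).
    CLOSED_LOOPS = {0: 1, 1: 0, 2: 0, 3: 0, 4: 0,
                    5: 0, 6: 1, 7: 0, 8: 2, 9: 1}

    given = {0: 1, 5: 0, 6: 1, 7: 0, 8: 2}  # the pairs from the puzzle
    assert all(CLOSED_LOOPS[d] == v for d, v in given.items())
    print(CLOSED_LOOPS[9])  # -> 1, the model's answer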
Thank you for this. I just installed it and it was exactly what I was looking for when I bought my Oculus: the opportunity to explore, zoom, and learn about the night sky freely in 3D. I tried many others, but this has been the only one to meet my expectations.
Yes. All modern systems run a huge amount of code if we count the underlying microcode, operating system, runtime libraries, third-party dependencies, and the application itself.
The thing is not the total size but the boundaries and the understanding of each layer, and how manageable each layer is for the humans modifying and maintaining it ... which is, of course, the total cognitive complexity.
I struggled initially to understand what this was about. Then, after digging a little more, I saw that it is an open and rich environment to visualize (while executing) data from different perspectives, while being extensible from within the same environment (using Pharo), allowing one to create a rich, custom ecosystem for data and systems exploration in an orderly and structured manner.
What I couldn't find is information about workshops or forums besides Discord where one could discuss and learn about its possibilities.
Specifically, I had trouble understanding how to apply it to other programming languages and their frameworks. I can see that for some of them (e.g. PHP) you have to export the project with a tool and then import it into GToolkit, but that turns such a system into just a static structure to be analyzed, not a live system to experiment with. This would mean that live documentation would require permanently exporting and then importing the code from such a system. To get a living system plus its real-time perspectives and embedded live documentation, it seems you must program the system in Pharo.
As someone who gets distracted way too easily, I usually require that a project demonstrate at a glance why it might be interesting, and this one failed pretty hard.
Luckily it was on HN, so I could come to the comments to see if it was worth looking into more deeply - but ideally it should be able to do that by itself.
My choices were either a 37-minute video or a bunch of text that seemed too abstract and didn't explain "what this does" in a concise, down-to-earth manner.
Also, mentioning "Smalltalk" would have been really useful. The fact that it sounded Smalltalky and that I vaguely remembered reading about Pharo was my only clue. The minute I understood this was related to Smalltalk, a lot of other things fell into place.
No affiliation, just a happy customer. I initially explored the DIY way, but since this one has such good references, I decided to start with it and go from there, with a better reference to compare against my DIY attempts.
Sadly, this helicopter won't record any footage; it is only meant to test flight control and to prove that the idea will work for future missions. There is an interesting Veritasium episode (YouTube) that talks about this (interviewing the actual designer from JPL). The only footage will be from Perseverance filming the flight.
This is not correct; there are multiple downward-facing cameras [a]:
1) Navigation (NAV) Camera. This is a global-shutter, nadir-pointed grayscale 640 by 480 pixel sensor (Omnivision OV7251) mounted to a Sunny optics module. It has a field-of-view (FOV) of 133 deg (horizontal) by 100 deg (vertical) with an average Instantaneous Field-of-View (IFOV) of 3.6 mRad/pixel, and is capable of acquiring images at 10 frames/sec. Visual features are extracted from the images and tracked from frame to frame to provide a velocity estimate.
2) Return-to-Earth (RTE) Camera. This is a rolling shutter, high-resolution 4208 by 3120 pixel sensor (Sony IMX 214) with a Bayer color filter array mated with an O-film optics module. This camera has a FOV of 47 deg (horizontal) by 47 deg (vertical) with an average IFOV of 0.26 mRad/pixel.
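The quoted IFOV figures are easy to sanity-check; a back-of-the-envelope sketch, assuming IFOV is simply the field of view in radians divided by the pixel count:

    import math

    def ifov_mrad(fov_deg, pixels):
        # Instantaneous field of view: total FOV (radians) per pixel, in mrad
        return math.radians(fov_deg) / pixels * 1000.0

    # NAV camera: 133 x 100 deg over 640 x 480 pixels
    h = ifov_mrad(133, 640)       # ~3.63 mrad/pixel
    v = ifov_mrad(100, 480)       # ~3.64 mrad/pixel
    print(round((h + v) / 2, 2))  # ~3.6 mrad/pixel, matching the paper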
One of the stated goals is to use the drone to scout interesting places for other drones. In theory, I guess it could make that decision without sharing the source images, but that seems a bit far-fetched. You'd want to study them in ridiculous detail.
> Its payload is a high resolution downward-looking camera for navigation, landing, and science surveying of the terrain, and a communication system to relay data to the Perseverance rover.
The problem isn't the camera; it's the uplink to Perseverance and all the other parts required for a usable camera. They're using Zigbee [1] to communicate with the rover at 200 kbps, and the solar panel recharging the batteries also has to power heaters to keep the electronics alive - there's no hardware connection between the two for data or power exchange, AFAICT. The drone is already so heavy that it can only stay aloft for 90 seconds to a few minutes between charges, so between the extra battery, lens, better antenna, RF module, etc., it'd require a redesign of the entire mission.
It's not that heavy, relatively speaking; it has to spin the rotors a lot faster to gain altitude in an atmosphere about 1% the density of Earth's, hence the shorter flight time.
The Snapdragon CPU has plenty of power for JPEG encoding, and likely even hardware-accelerated encoders. The 640x480 8 bpp navcam images could be entirely usable at a fairly lossy 40:1 compression ratio, which comes out to about 8 KB per frame; for a 90 s flight recorded at 10 fps, that's only about 7.2 MB for the whole flight. It would take a little under 5 minutes to send that back to the rover at 200 kbps. The color high-resolution camera isn't set up for high-frame-rate recording, IIRC, so that was never an option.
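That arithmetic is easy to verify; a quick back-of-the-envelope sketch:

    raw = 640 * 480 * 1              # 8 bpp grayscale: ~307 KB per raw frame
    frame = raw / 40                 # 40:1 JPEG: ~7.7 KB per frame
    total = frame * 10 * 90          # 10 fps for 90 s: ~6.9 MB per flight
    seconds = total * 8 / 200_000    # over the 200 kbps Zigbee link
    print(frame / 1024, total / 1e6, seconds / 60)
    # -> ~7.5 KB/frame, ~6.9 MB total, ~4.6 minutes to downlink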
High-quality 30 fps video was never really an option, but it's entirely possible, even likely, to get navcam video after a flight. The Snapdragon is also fast enough to run an intraframe compression codec (or even h.264) on the navcam video and stream it live back to the rover, for relay back to Earth later, as was done with the landing imagery.
The nature of Perseverance relaying through orbiters for high-speed uplink to Earth was always going to preclude "live" video from any instrument. The only data important enough for "live" transmission is vehicle telemetry, and even then it's only available for the portion of a sol (Martian solar day) that Earth is visible from the rover.
One of the advantages is being able to mark the destination as 'append-only', reducing the risk in case an attacker gains access to the backed-up server and wants to delete the backups. Another is being able to easily switch to a different backend, or copy data between them.
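To illustrate the append-only idea (a minimal sketch of the concept, not any particular tool's API): the storage end accepts new objects but refuses deletes and overwrites, so a compromised client can add garbage but can't destroy existing backups.

    import os

    class AppendOnlyStore:
        """Toy object store that only ever accepts new blobs."""
        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def put(self, name, data: bytes):
            path = os.path.join(self.root, name)
            # open with 'x' so the write fails if the object already exists
            with open(path, "xb") as f:
                f.write(data)

        def delete(self, name):
            raise PermissionError("refused: this store is append-only")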
It promises an enhanced-vision technology allowing beautiful images even inside the city, and also the ability to connect to a mesh of telescopes around the globe participating in scientific research from the SETI Institute, allowing it to be aimed remotely at the object under investigation.