Personally, I think the movie Oblivion had one of the best UIs of all time. Every element was animated with real attention to detail so as to convey meaning, and wasn't there just for show like most other sci-fi UIs. [1] I wish this UI were reality.
While I think Star Trek popularized the idea of using touch screens everywhere, I don't think their UIs are practical, as all of them were static (except for the tablets, which only displayed text).
The realistic detail I remember from that movie is that the drones had plausible-looking reaction control thrusters (like those from missile interceptors [1]). There's a behind-the-scenes video that mentions the idea and has some (spoiling) shots from the film. [2]
In a scene (spoilers, [3]) where a hovering drone is firing its side-mounted autocannon turrets, the reaction thruster animation doesn't match what's needed to counter the torque from the recoil:
1. The drone begins by firing both port and starboard turrets forward--no torque from recoil here, only backwards force that is countered by the drone's gimballed main thruster.
2. The drone ceases firing its starboard turret to slew it towards a target directly behind it. Of the aft-facing reaction thrusters visible in the shot, only the starboard ones are firing. But this would exacerbate the torque of the still-firing port turret instead of countering it!
3. The starboard turret finishes targeting and begins firing aft, but the reaction thruster animation remains unchanged.
Of course, this level of detail already far exceeds expectations, considering that this all happens in less than a second, and that there's no impact on viewing experience when fast-moving effects don't stand up to frame-by-frame scrubbing.
Imagine this:
Picard goes to the holobridge.
Computer, give me flight controls in the style of the Enterprise-D
Computer … … …
Picard: Ah, right at home!
Reminds me of Lt. Barclay, when he was "infected" by the alien probe. He found the existing computer interfaces inefficient so he went to the holodeck to create more efficient ones.
"Tie both consoles into the Enterprise main computer core utilizing neural-scan interface."
"There is no such device on file."
"No problem, here's how you build it."
The VR game Star Trek: Bridge Crew played with this idea briefly in the tutorial levels. In hindsight, it seems so obvious. Why not have a bridge suited to exactly what you need, including the ability to change it without having to physically move to the rarely used Battle Bridge? If nothing else, everyone having their own ergo setup seems really nice.
Given the frequency with which holograms ran amok, it seems prudent to have as much separation as possible between critical functions and the holodeck.
From a military-readiness perspective, it also seems like you'd be at an enormous disadvantage if all of your controls depended on the (presumably) high power draw of the holodeck. Physical controls that don't disappear in the midst of an already bad situation seem best.
- Film "FUIs" are almost invariably dark these days. I wonder to what extent the designers are influenced by the dark modes in the tools they use.
- Most of them pack enormous amounts of stuff in with tiny fonts. Probably to stop the viewer from getting distracted trying to read stuff. But there seems to be an ambient assumption that 'futuristic' stuff is way more info-packed than today's UIs are - where does that come from?
- Most of them over-use transparency to absurd degrees. For holograms where you want to see the actor's face it makes sense but it appears even in shots where no actors are visible, e.g. https://www.hudsandguis.com/home/2018/florida-hospital-ui
- A typical film FUI has a lot of animated data visualizations, but real UI almost never does. Is this because such visualizations are useless clutter or because film makers have better tools for creating them than actual programmers do?
The difference between "actual" FUI and film FUI is best illustrated by the concept videos produced by engineering/tech firms:
We see way less detail and much more light-mode stuff.
I guess I'm curious how much of the gap we see between real UI and stylized/imagined UI is to do with lack of tooling, different design fashions and the different needs of film UI.
> Is this because such visualizations are useless clutter or because film makers have better tools for creating them than actual programmers do?
Experience suggests mainly the former, a little of the latter. In a film visualisations could be said to fall into "scenery/looks cool/thematic" (which normally corresponds to useless) and "drives the story". For a vis to drive the story it needs a single interpretation - and animation helps with that.
Real world vis rarely has a single interpretation, so attempting to animate like they do in a film would often make the vis less useful (as it would 'hide' valid interpretations, and most vis exists to help explore).
That said, some vis does exist to tell a story (often deliberately hiding the inconvenient truths rather than highlighting useful ones) and benefits from film like tools. These tools do exist, but are (rightfully) focused on creative media rather than programmers. The overlap exists, but is small.
It's probably more apologetics than anything else, but as an EE my first thought is that the pixels are some kind of OLED that is transparent by default, so every pixel that is not lit up saves power. Lighting up a full background on that handheld in the second pic would literally take thousands of times more power (obviously only taking into account the screen, not CPU, etc.).
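A back-of-the-envelope check of that claim, under my own simplistic assumption that per-panel OLED power scales roughly linearly with the fraction of pixels lit at a given brightness:

```python
def panel_power_ratio(lit_fraction_sparse: float,
                      lit_fraction_full: float = 1.0) -> float:
    """How much more power a fully lit background draws than a sparse
    overlay, assuming power is proportional to the lit-pixel fraction."""
    return lit_fraction_full / lit_fraction_sparse

# A sparse line-art FUI overlay might light ~0.05% of the pixels,
# so a full white background would draw on the order of 2000x the
# panel power (screen only, ignoring CPU etc.).
ratio = panel_power_ratio(0.0005)
```

The exact figure obviously depends on how many pixels the overlay lights and at what brightness; the point is just that "thousands of times" is plausible for sparse line art.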
I think dark mode provides a variety of practical advantages in the VFX context.
* It would seem to be easier to transition a dark-mode UI between solid and translucent/transparent/hologram elements. I think the moment you have transparent/holographic elements, you're almost immediately sucked into a dark-mode type UI.
* Dark-mode displays are just going to be... darker on average. I imagine it would simplify lighting in a lot of scenarios. It probably also gives you more flexibility when framing/creating reflection shots.
* I feel like in film, big expanses of white / light grey carry much stronger connotations than big expanses of black / dark grey. Large expanses of black can be interpreted as absence, and somewhat natural. We are familiar with deep pits of shadow. We are familiar with the night sky. Big expanses of white / light grey, meanwhile, are interpreted as sterile or unnatural, or at least "unusual". Think of the first training scenario in The Matrix (the white expanse with the TV and two chairs). Think of God's office in Bruce Almighty.
> I feel like in film, big expanses of white / light grey carry much stronger connotations than big expanses of black / dark grey.
This might be a carryover from the days when fully lighting up a movie theater screen would reveal all the imperfections and stains in the canvas, and film makers learned to avoid that.
> Is this because such visualizations are useless clutter or because film makers have better tools for creating them than actual programmers do?
Another reason is that real visualisations are hard. You have to faithfully represent some actual data, and make it ‘usable’ in the sense of well designed keys/labelling etc, and choose a chart type that is appropriate for the data and the point you are making, plus you somehow have to make it pretty and not overwhelming. These constraints all fight each other, you go round and round in circles and eventually you have something passable. It’s a professional field at the intersection of data science and design, it’s not like there is some magic ‘tooling’ that solves it for you.
Fake visualisations are very easy to make pretty - you can just generate data that looks nice, the design doesn’t have to have any real utility or even make sense, etc. It just has to look like a data visualisation.
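As a toy illustration of that point (the signal below is entirely made up; nothing real behind it), "data" that merely looks busy and organic is trivial to synthesise:

```python
import math
import random

def fake_telemetry(n=200, seed=42):
    """Generate a plausible-looking but meaningless signal:
    two sine waves of different periods, a slow upward drift,
    and a little Gaussian jitter."""
    rng = random.Random(seed)
    return [
        0.6 * math.sin(i / 9.0)      # slow oscillation
        + 0.3 * math.sin(i / 2.3)    # fast oscillation
        + 0.1 * i / n                # slow drift upward
        + rng.gauss(0, 0.05)         # jitter
        for i in range(n)
    ]

signal = fake_telemetry()
```

Feed that into any plotting tool and it reads as "telemetry" at a glance, with none of the labelling, scaling, or interpretability constraints a real visualisation has to satisfy.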
> … assumption that 'futuristic' stuff is way more info-packed than today's UIs are - where does that come from?
Text adds texture, and it also tells the audience that our hyper-aware and very clever protagonist is seeking and scanning the text for some sort of data that us normie non-future people wouldn't understand.
At the end of the day, it’s just a creative trope that consistently pleases the audience. It looks cool.
As for the source, you could go all the way back to Terminator or TRON, and as recent(ish) as The Matrix, but I think Ghost in the Shell has been most influential.
On a related note, does anyone have good recommendations for real-world "power UIs"? I.e., computer user interfaces that have a steep learning curve but are extremely powerful once mastered. E.g., vi/emacs or one-line bash scripts + *nix CLI programs. AutoCAD/Blender and other 3D modeling tools also have good UIs once you go through the pain of mastering them and learn how to use them with one hand on the keyboard and one hand on the mouse.
> AutoCAD/Blender and other 3D modeling tools also have good UIs once you go through the pain of mastering them
I'd like to add ZBrush and Houdini to this list. (Both of them are 3D modeling tools.)
ZBrush is known for having an extremely unintuitive UI, but once you figure it out it actually all makes perfect sense and generally you wouldn't want it any other way.
Houdini is weird because it's not very difficult for programmers to pick up, but it's very hard for everyone else. That's because it's literally a programming language for creating 3D models. As a Haskell fan I appreciate that the language is lazy, but aside from that it does leave something to be desired. That said, the HN crowd should be able to see why it's very powerful to create 3d models by programming them instead of by hand-manipulating a mesh (like you would in Blender or ZBrush).
Ableton (music creation software) is also extremely powerful but difficult to learn.
Lastly, it's not really a UI, but nix is a build system that's hard to learn but once you do learn it you can do things that would be very difficult in any other build system I know of.
I'm going to quip about Eve Online, lol. Tons of information screens, but also 3rd party tooling.
For more serious software, it's probably trading software; you know the type: multiple screens of red/green flashing numbers, charts, and of course the Bloomberg terminal. These people move millions at the press of a button.
I think VR/AR/MR have the most new real estate in terms of power UIs because you are physically present within the interface. The best examples to me are VR art tools [1], which let you create 3D objects in a 3D space or paint like you would in front of an easel. The UIs range from grabbing a spray can suspended in mid-air to menu controls for spline points and camera views. Still evolving and nowhere near feature parity with established tools, but I see this changing over time.
These aren't "Future User Interfaces". They are UIs made to look good on film, appearing fancy and futuristic. Take a hologram, for example: great for having a person and the thing they manipulate in the same shot. Actually working on a hologram, not so great.
Often they will have very little information on a huge surface, while real user interfaces need to convey as much information on as little surface as possible.
They probably have as little value for a UI designer as a car chase in a film for a regular driver.
The same could be said of touch screens until the first iPod touch.
I wouldn't want diagnostic data, system performance data, audio visualization or anything like that in a holographic display. But an interactive 3d holographic display would be fantastic for mapping information with topographical data, or anything mapping to the real world. The most intuitive environment is the real world which is three dimensional, tactile and responsive, so representing that abstractly can only be done fully granularly and intuitively in a 3 dimensional environment. Trying to do other things in three dimensions just doesn't make a lot of sense and usually you're right, they're selling something unnecessary and counterproductive.
But touch screens are a step back from keyboard / pen / mouse. They have their uses if none of those are easily available.
I'm not sold on holographic displays at all. We've been displaying three-dimensional data on two-dimensional planes since the Middle Ages. It works very well. Arguably better, because it lets the user (or the one presenting) "collapse" one dimension.
It is also easier to manipulate: you only turn the object. With a holographic display, you have to turn your head and turn the object. Even worse, if multiple people are looking at it, they don't all see the same thing.
They're a step back only in terms of raw information representation and manipulation. They're fantastic for media interaction. If you're dealing with text and images, separating out the display and inputs makes sense. I'd say the same holds true for interactive environments like video games. If you're dealing with video or music players, it's added complexity.
A map on a paper does have the advantage of simpler interaction and canonical representation regardless of perspective, but it does that by sacrificing granularity. You can look at one map at a time, you can't zoom in, you can't change angles, all you can do is switch to a different map. A map of the solar system for example would benefit from three dimensions, and you can represent more abstract information as well such as orbital paths.
There isn't one UX/UI that is above the rest. They all have their use cases. I think people are trying to cram the wrong things into the wrong presentation/interaction model and it leads to frustration. There's a reason nobody writes code on a smartphone, there's a reason social media exploded when smartphones became commonplace. Nobody wants to use metaverse to search something online, but people want a 3d interactive environment when playing a game. There's a situation for everything, the trick is figuring out what fits best where.
But the interfaces were always an abstraction; your mind had to learn a different way to interact with something regardless. The tactile feedback is somewhat overstated IMO: you know that what you're feeling and what you're doing aren't directly related, unlike manipulating a Rubik's cube or something. It's good to have that, it helps with an interface, but it doesn't have to be the same as a keyboard, just like a keyboard doesn't have to be the same as arranging letter blocks on a table. Touching what you're trying to manipulate, such as on a touchscreen, is more intuitive, but that's what makes it less powerful for complex tasks: you want your abstract interface to let you manipulate more than just what you're seeing in front of you. A keyboard does that; a touch screen does not.
IMO the more varied the sensory input we receive from an interface the better (higher information throughput, more diverse engagement of the mind, fine motor skills, etc. etc.), and satisfaction with touch taken away and being limited almost exclusively to sight is to some degree Stockholm syndrome.
We gained something, but we also lost something, and hopefully we will not settle for that. I am looking forward to tactile interfaces, whatever new tech will make them possible.
What are the functional differences in display (not input) between an actual 3D display and a 3D object projected onto a 2D display? The only two I can think of are first, being able to use our eyes’ actual depth perception ability instead of the brain imagining it, and second, being able to move your head to get perspective.
What are the differences in input? I can think of being able to touch the display more directly. The world is indeed tactile, and I hope such a display would be, too, instead of touching air. No matter how reactive the display is, touching air would not be the same as touching real objects.
The problem is that all of these “advantages” require physical movement: moving your hands, arms, and head. Doing this for creative work, which takes a lot of time, would be exhausting. There is a reason why touch screens on desktop computers are not common, even though we have the technology. Your arms get tired.
A trackball mouse is a way, way more effective 3D modeling tool than directly manipulating items with your hands. You can work a lot quicker and with a lot less movement.
> Doing this for creative work, which takes a lot of time, would be exhausting. There is a reason why touch screens on desktop computers are not common, even though we have the technology. Your arms get tired.
Right, they are props made to look stylish and set the scene. They aren't really useful designs because they don't have to actually work in the real world. They're the equivalent of those scrolling streams of random code or terminal output you see computer hackers in movies working with.
Hmm, maybe I should make a "Future Code Gallery" with snippets of code from movies.
There is actually a functional purpose to futuristic UIs in film, which is to help drive the story. For example, in Star Trek, if a ship is damaged the UI will go red and start flashing to help convey to the audience that something really bad happened. One of the newer Star Wars movies did something similar to show an entire fleet of ships being wiped out.
In my mind, The Expanse did a fantastic job with their UIs. One of the things that stood out for me was the integration between handheld and stationary devices -- for example, finding some information via personal mobile, then sharing it with others by "swiping" it onto a large stationary screen.
You can see that a lot of thought went into how the interactions would look; a lot more than usual, I'd say. Or even... a lot more than in many actual software products! It feels like they considered it as more than just eye candy. The way information is shared and communication happens in that envisioned world seems very realistic and complete.
As someone who often designs user interfaces, I hold this show as one of the recent high-water marks for How To Do Things Well.
I think futuristic computer UIs in TV and films before computer graphics would make for a fascinating topic. For example, the computer UI in the BBC TV series The Hitchhiker's Guide to the Galaxy deserves a mention.
The low-budget series was broadcast in 1981, a time when computer graphics were limited and not widespread in TV and film. All the "computer graphics" were hand-animated to simulate a computer display. The "computer" graphics still stand up brilliantly more than 40 years later:
There are very few games that use a CLI as the primary user interaction. Duskers (2016) is one such game: you type commands into a terminal to direct dog-sized rovers as they explore derelict spaceships.
It might be unique among CLI games in that you can alias chains of commands together to create a new command (macro). For example there is no "begin" command in the game. `begin=open a1; navigate 1 2 a1; generator 2; status 1;` causes the airlock 1 (always between your ship and the derelict) to open, rovers 1 and 2 to navigate through airlock 1 to whatever room is on the other side, then for rover 2 to try to power on the room while rover 1 gives you the status of the room.
The alias feature means that when the gameplay starts feeling repetitive, you can create a command to do more of that work for you.
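A rough sketch of how such an alias layer might work (hypothetical code, not Duskers' actual implementation): a line of the form `name=cmd1; cmd2; ...` defines a macro, and any aliased name encountered later is expanded recursively into base commands.

```python
def expand(line, aliases):
    """Recursively expand aliased names into a flat list of base commands."""
    commands = [c.strip() for c in line.split(";") if c.strip()]
    result = []
    for cmd in commands:
        name = cmd.split()[0]
        if name in aliases:
            # An alias body may itself reference other aliases.
            result.extend(expand(aliases[name], aliases))
        else:
            result.append(cmd)
    return result

def handle(line, aliases):
    """Either record an alias definition ("name=body") or return the
    expanded list of base commands to execute."""
    head, sep, body = line.partition("=")
    if sep and " " not in head.strip():
        aliases[head.strip()] = body
        return []
    return expand(line, aliases)
```

With this, defining `begin=open a1; navigate 1 2 a1; generator 2; status 1` and then typing `begin` yields the four underlying commands in order, exactly the kind of macro described above.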
The issue with those fictional UIs is that the people who design them aren't designing to serve a real purpose, but to impress the audience. It's like a car cockpit designed by someone with nothing more than a limited and distorted idea of how cars are driven, made to impress someone who doesn't know how to drive either.
From a techie point of view, those UIs are just garbage: full of irrelevant information, confusing, and severely lacking in controls and interactive effectiveness. The people using them act like novices who can't really do anything but tap around, hoping something happens in the direction they want to go.
Even filmmakers should remember one thing: the purpose of a UI is human-device interaction, not human-device-driven decision-making. I understand they have to please their audience and those who pay them (which also means pushing the public toward certain behaviors, commercially and politically convenient for some film financiers), but if a movie is made ONLY on those principles, its appeal will sink lower and lower, and that's probably why the fans of "old" movies grow in number every day...
A future UI, IMVHO? Look back at Xerox UIs like http://augmentingcognition.com/assets/Kay1977.pdf or experiments like Sun's NeWS PostScript pizza tool: a document-centric interactive UI, not one for passive content consumption (or at most content selection, like modern web service UIs) but one for production and consumption at the same level, with no perceived difference between the two modes; I write as I read, I execute as I write, etc. Emacs/org-mode is one of the most modern UIs we have alive; modern notebook UIs are limited, archaic UIs that offer far less than Emacs plus a bit more modern polish (like proper image/video integration, to a somewhat less bad extent). The rest? So "advanced" that it sits behind the early history of computer UIs, only a bit ahead of the ENIAC's, and given the leap between the two, the ENIAC was in theory more advanced anyway, its raw UI merely reflecting the technical limits of its time.
That's not the purpose of a film UI, though. Its purpose is a lot closer to that of a prop: it can convey information about the plot (ACCESS DENIED), be part of character development (fluid mastery of a complex tool), or convey setting and tone (a metal-and-cracked-glass screen implies something different than flowing holograms).
The thing it is not for is an actual person to use as a tool to solve a problem in real space. It's not inherently a worse prop because it can't be used that way. Just as a prop gun that can't fire is a shitty gun, but a better prop for that very reason.
Filmmakers should study real use of UIs so they understand how better to use that prop to convey their meaning to the audience. But they shouldn't pursue verisimilitude for its own sake. A prop gun that the audience can tell wouldn't be able to fire conveys a very different meaning to one that looks ready to go. Filmmakers should be able to ensure they're conveying the information they think they are, but the rest is artistic choice.
It's great that these things are getting more and more realistic and less handwavy. You still see the occasional rogue AI scrolling WordPress source code on a screen but it's always a delight when little details have love and thought put into them. I even know a guy who was a chess consultant on a Hollywood film recently, to ensure they used realistic positions that weren't embarrassing on close inspection (scenes got cut though, sadly).
If I remember correctly, there was a trend over the 2000-2005 period. Back then, there were futuristic UIs like gkrellm, audio players like Winamp, and a few themes for Windows to make it look cooler.
[1] https://gmunk.com/OBLIVION-GFX