"Qiskit capabilities show 24 percent increase in accuracy"
what was it before? What good is a computer that is not 100% accurate? Do I have to run a function 1000x to get some average 99% chance the output is correct?
(Right now "computers that aren't 100% accurate" are all the rage, even without quantum computing. Though a lot of people are wondering if that's any good, too.)
They're especially good for oracle-type problems, where you can verify an answer much faster than you can find one. NP problems are an especially prominent example. If the answer is wrong, you try again.
In theory it might take a very long time to find the answer. But even with only 25% accuracy, the odds of being wrong 10 times in a row are only about 6%, and the odds of being wrong 100 times in a row are so small they need scientific notation (about 3 × 10^-13). That's worth it to be able to solve an otherwise exponential problem.
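The retry arithmetic above is easy to check. A short sketch, using the 25% per-run accuracy figure from the comment as a made-up illustrative number:

```python
# A solver that is right only 25% of the time per run, in a setting
# where each answer is cheap to verify, so a wrong run just means
# trying again.
p_correct = 0.25
p_wrong = 1 - p_correct

def all_wrong(k):
    """Probability that k independent runs are ALL wrong."""
    return p_wrong ** k

print(all_wrong(10))   # ~0.056, i.e. about 6%
print(all_wrong(100))  # ~3.2e-13
```

The failure probability shrinks geometrically with the number of retries, which is why even a very unreliable solver is useful when verification is cheap.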
Quantum computers have error bounds, and you can use that to tune your error rate to being-hit-by-a-cosmic-ray level of acceptability.
It's still far from clear that they can build general-purpose quantum computers big enough to do anything useful. But the built-in error factors are not, in themselves, a bar.
One of my colleagues read a paper about quantum computing techniques for solving complex optimization problems (the domain of complex mixed-integer solvers). During a trial period with one of the quantum computing companies, he tried it on a financial portfolio optimization, replicating the examples the company provided.
The computer *did not* produce the same results each time, and often the results were wrong. The service provider's support staff didn't help -- their response was effectively "oh shucks."
We stopped considering quantum computing after that. It wasn't suitable for our use case.
Maybe quantum computing would be applicable if you were trying to crack encryption, wherein getting the right result once is helpful regardless of how many wrong answers you get in the process.
Many classical information processing devices are less than 100% reliable. Wifi (or old school dialup) will drop a non-trivial number of packets. RAM chips have some non-zero amount of unreliability, but in most cases we don't notice [1]. Computer processors in space will similarly fail due to cosmic ray bombardment. In all cases, you mitigate such problems by adding redundancy or error correction.
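The redundancy idea is simple to demonstrate. A minimal sketch of a 3x repetition code, with a 10% per-copy flip rate made up purely for illustration:

```python
import random

def noisy_copy(bit, p_flip=0.10):
    """Store one copy of a bit; it flips with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def store_with_redundancy(bit):
    """Store the bit three times instead of once."""
    return [noisy_copy(bit) for _ in range(3)]

def read_back(copies):
    """A single flipped copy is outvoted by the other two."""
    return 1 if sum(copies) >= 2 else 0

# Residual error = chance that 2 or 3 of the 3 copies flip:
# 3*p^2*(1-p) + p^3 = 0.028 for p = 0.10, versus 0.10 raw.
random.seed(0)
trials = 100_000
errors = sum(read_back(store_with_redundancy(1)) != 1 for _ in range(trials))
print(errors / trials)
```

Adding more copies (or a proper error-correcting code) pushes the residual error down further, which is the same basic trade being made in RAM ECC and in quantum error correction.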
Quantum computer hardware is similarly very error-prone, and it is unlikely that we will ever build quantum hardware with ignorable levels of error. However, people have developed many techniques, often far more sophisticated than in the classical domain, for handling the fragility of quantum hardware. I am not familiar with the details of the recent improvements in Qiskit, but they are referring to improvements in specific "error mitigation" techniques implemented within Qiskit. These techniques will be used in tandem with other methods, like error correction, to create quantum computers that give answers with a close to (but less than) 100% chance of success.
As you say, in these cases, you will repeat your simulation a few times and take a majority vote.
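The repeat-and-vote scheme can be sketched in a few lines. The "device" here is a stand-in with a made-up 70% per-run success rate, not a real quantum backend:

```python
import random
from collections import Counter

def noisy_device(p_correct=0.7):
    """Stand-in for a noisy computation: usually right, sometimes
    returns one of several wrong answers."""
    if random.random() < p_correct:
        return "right"
    return random.choice(["wrong_a", "wrong_b"])

def majority_vote(runs=101):
    """Run the device many times and return the most common answer."""
    counts = Counter(noisy_device() for _ in range(runs))
    return counts.most_common(1)[0][0]

random.seed(1)
print(majority_vote())  # "right" with overwhelming probability
```

Because the wrong answers tend to scatter across many values while the correct one repeats, the plurality winner is usually reliable even when individual runs are not.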