We’re not monitoring phone calls. We’re monitoring the network conditions. This means we can provide an estimate of call quality even when there are no calls on the network.
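For anyone curious how call quality can be estimated without any calls: one common approach is to derive a MOS score from measured latency, jitter, and packet loss, using a widely circulated simplification of the ITU-T E-model (G.107). The sketch below is illustrative only — the parameter names and thresholds are the textbook simplification, not VoIP Spear's actual algorithm:

```typescript
// Hedged sketch: estimate a MOS score from network measurements alone,
// using a common simplification of the ITU-T E-model. Not VoIP Spear's
// real implementation; constants are the usual textbook values.
function estimateMos(latencyMs: number, jitterMs: number, lossPct: number): number {
  // Jitter is conventionally weighted ~2x, plus a fixed codec delay.
  const effectiveLatency = latencyMs + 2 * jitterMs + 10;

  // Start from the base R-factor and penalise for delay...
  let r = effectiveLatency < 160
    ? 93.2 - effectiveLatency / 40
    : 93.2 - (effectiveLatency - 120) / 10;

  // ...and for packet loss.
  r -= lossPct * 2.5;

  // Map the R-factor onto the 1..4.5 MOS scale.
  return 1 + 0.035 * r + 0.000007 * r * (r - 60) * (100 - r);
}

console.log(estimateMos(20, 5, 0).toFixed(2)); // healthy network: ~4.39
console.log(estimateMos(300, 40, 5).toFixed(2)); // degraded network: ~2.77
```

Because the inputs are pure network metrics, you can sample them continuously (e.g. every minute with synthetic probe packets) and get a quality estimate whether or not anyone is on a call.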
I'm struggling to understand why this would be useful, so I guess I'm not the target market - can you say something about who this is useful for, why they want to see this info, and what they can do with it?
It's frustrating for people who experience voice quality issues because the problems are usually intermittent. This means that, by the time they contact their service providers, the problem is no longer occurring and they just get a lame 'have you tried rebooting your modem/router' response.
VoIP Spear monitors 24x7x365 so customers can provide historical data to their service providers to show the issue occurred.
Also, service providers use VoIP Spear because they want help troubleshooting. They are also frustrated that the issue is no longer occurring by the time the customer calls for support.
1. Personal: Home users use VoIP Spear to monitor their VoIP service to provide information to their service providers about the problems they are experiencing.
2. SME: Businesses that have VoIP use VoIP Spear in the same way as the home users.
3. Service providers: Service providers use VoIP Spear because it is an inexpensive way to get data about the issues their customers are experiencing. Most VoIP service providers are otherwise unable to get any data about the quality of the phone calls their customers experience.
I'm not sure why people were downvoting the parent comment.
TypeScript and plain JavaScript both support async/await very well, as long as you use Babel to compile your code.
In fact, you don't even need Babel, because a large portion of browsers natively support async/await: https://caniuse.com/#search=async
I'm not sure why people say you need libraries that specifically support async/await. In JS/TS, async/await is built into the language itself, and most libraries use the Promise API, which means they work with async/await too (since async/await is built on top of promises).
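To make the point concrete: any function that returns a Promise can be awaited, with no library-specific support at all. `fetchUser` below is a made-up stand-in for a real Promise-based API:

```typescript
// fetchUser is a hypothetical Promise-based API; any library function
// that returns a Promise behaves the same way under await.
function fetchUser(id: number): Promise<{ id: number; name: string }> {
  // Simulate an async call with a plain Promise.
  return new Promise((resolve) =>
    setTimeout(() => resolve({ id, name: "user" + id }), 10)
  );
}

async function greet(id: number): Promise<string> {
  // await unwraps the Promise; rejections surface as ordinary exceptions
  // you can catch with try/catch.
  const user = await fetchUser(id);
  return `hello, ${user.name}`;
}

greet(1).then(console.log); // prints "hello, user1"
```

Since async/await is just syntax over promises, the library author doesn't have to do anything: returning a Promise is enough.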
I don't understand this fixation on symbolic reasoning. Do any other animals practice this? If the answer is no, then it is probably not the most important milestone to AGI, or at least not the one we should currently be aiming for. Right now we cannot replicate the cognition of a mouse. It feels like wanting to go to Mars before figuring out how to build a rocket.
Seconded. Even if animals do symbolic reasoning, they do it on top of hardware based on continuous physical dynamics, more similar to DNNs... So why not build on that platform?
I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence... But I don't see where there is evidence that symbolic reasoning is either necessary or sufficient for AGI, except people describing how they think their brain works.
Related, there are a lot of statements that symbolic or rule based systems do better / as well as / almost as well as neural methods. Citation please, I'd love a map of which ML problems are still best solved with symbolic systems. (Sincerely - it's not that I expect there aren't any.)
> I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence...
Good point, we wouldn't have AlphaZero now if we only relied on biological inspiration. Nature hardly ever performs Monte Carlo Tree Search (though I'm not sure this is entirely true, see slime mold searching for food: https://thumbs.gfycat.com/IdealisticThirdCalf-size_restricte...).
The thing is, whatever the hell it is that human brains actually do in the background to produce our 'understanding' of the world and our ability to synthesize new ways to manipulate it, we're also very good at back-fitting explanations based on symbolic reasoning. So it looks like machines need symbolic reasoning to replicate human abilities, whereas I'd bet a dollar that actually, we're doing something quite different (and messy and Bayesian and statistical) in the background and then, using the same process, coming up with a story to explain our outcome semantically. It's not insight so much as parallel construction.
I fully agree, as I wrote in my other comment in here. Logical symbolic reasoning is usually post-hoc rationalisation built constructively to come to an already held conclusion that "feels right". It's rare that someone changes their mind due to logic, especially if the topic isn't abstract and has real-world consequences and emotional engagement.
> usually post hoc rationalisation built constructively to come to an already held conclusion that "feels right"
Counterfactual reasoning is a promising direction for AI. What would have happened if the situation were slightly different? Answering that means we have a 'world model' in our heads and can try our ideas out 'in simulation' before applying them in reality. That's why a human driver doesn't need to crash 1000 times before learning to drive, unlike RL agents. This post hoc rationalisation is our way of grounding intuition in logical models of the world; it's model-based RL.
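The 'try it in simulation first' idea can be sketched in a few lines. Everything here is a toy illustration (the state, actions, and dynamics are invented for the example), but it shows the core loop of model-based planning: roll each candidate action forward through an internal model and reject the ones that end badly, without ever taking them in the real world.

```typescript
// Toy world model: state is distance to an obstacle; the dynamics
// function below is a hand-written model, standing in for a learned one.
type State = number;
type Action = "accelerate" | "brake";

// Predict the next state for an action without acting in the real world.
function simulate(state: State, action: Action): State {
  return action === "brake" ? state : state - 5; // accelerating closes 5 units
}

// Counterfactual planning: prefer to accelerate, but only if the
// simulated outcome avoids a crash (distance <= 0 means collision).
function plan(state: State): Action {
  const candidates: Action[] = ["accelerate", "brake"];
  return candidates.find((a) => simulate(state, a) > 0) ?? "brake";
}

console.log(plan(100)); // far from the obstacle: "accelerate"
console.log(plan(3));   // accelerating would crash in simulation: "brake"
```

The crash happens only inside `simulate`, which is exactly the point: the agent pays for mistakes in its model, not in reality.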
I think the fixation on symbolic reasoning comes from ignorance of how hard classification is versus how hard pure mechanical symbolic operations are for humans. It's easy to make the mistake of thinking that since a computer can rapidly multiply two numbers together (hard for humans), it is operating at a higher level than a human brain.
Turns out this is wrong. Human brains are very efficient.
Subsymbolic systems, such as ANNs, are clearly good at some things, and symbolic systems are better at others.
It is argued that symbolic reasoning is required for what we might call higher levels of intelligence (let's assume this is correct).
Symbolic systems have struggled with grounding symbols in the physical world, because it's messy and complex, i.e. the area where subsymbolic systems play best.
If we assume that ANNs are approximately akin to natural brains, can we take it that they are examples of a subsymbolic system able, with the correct architecture, to produce (perhaps the wrong word) a symbolic reasoning system?
Perhaps this emergence on top of subsymbolic processing is what humans (and other animals, to varying degrees) possess. Perhaps GOFAI suffered in the past because it was going top-down, or never going down to the subsymbolic level to ground its symbols.
Perhaps ANNs struggle because they're not going up to symbolic reasoning.
Then again, perhaps ANNs (like organic brains) evolved in a setting where reaction and perception gave the critical survival advantage; only much later did symbolic reasoning become possible and beneficial, and on hardware that wasn't necessarily developed for it in the most efficient way.
Having believed for 20+ years that ANNs are sufficient for AGI, and possibly offer an elegant solution, I currently think they are not the most efficient route at this time (nor plausible with current compute/hardware, probably not for many years, likely not in my lifetime). Practical progress, imho, lies in hybridisation of ANNs and logic (though I'm not referring to hand-baked rules), and I'd even propose that mixed hardware might supersede a pure ANN, or what evolution has provided in the brain.
100% agree. I am terrible at mental arithmetic, but I am exceedingly good at performing symbolic operations playing bullet chess. It's primarily a visual or geometric calculation, not purely abstract like math.
I think most people don't realize that our brains have this ability. But all you need to do is spend a few months learning chess and you'll see for yourself.