blackskad's comments

No, it just reverses the dynamics: instead of starting out low to get people in the door, you start with a list price that's too high. If the bids you're getting are always far below your asking price, you know the free market isn't willing to cough up what you want, and you lower the price to something more in range. If your initial asking price is $1M and all bids are around $800k, consider dropping the list price to $850k and see if they want to play now.

That's the usual dynamic in Belgium.


So why is this better, exactly? Seems like it just takes longer to accomplish the same goal (finding fair market value, FMV). Buyers are still not guaranteed their offer will be accepted unless they gratuitously overpay (by offering list price).


You're assuming that all sellers always overprice by a very large percentage and that nobody accepts a lower offer.

What usually happens, though, is that you determine an estimated FMV (eFMV) based on recently sold properties in the neighborhood that are similar in size. Experienced realtors are usually quite good at this. You add a factor to the eFMV to get your list price. As a seller, in the worst case you may have to drop your list price to your eFMV; in the best case, you get a nice bonus. When there are multiple bidders around your eFMV but under listing, you have some leverage to get a higher bid. Let's assume your eFMV is 850k. "Look, I have a bid of 850k. You're at 825k. My list price is 900k, but if you bid 875k now, it's yours guaranteed." It looks like a steal in the buyer's eyes ("25k below list price!") and you get a nice 25k bonus over the eFMV.

As a buyer, this silent "list price is always accepted" rule gives you the ability to properly filter properties, because the list price functions as a cap. This saves both buyer and seller time, because you're not chasing unreachable properties. You can also get a realtor to produce your own estimated FMV for a property that looks interesting. It's up to you to decide whether the certainty of the purchase is worth the difference between the list price and your eFMV. If not, you can always bid something lower, closer to your eFMV, but with the possibility that someone outbids you.
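
To make that trade-off concrete, here's a toy sketch in Python (all numbers and the outbid probability are made-up assumptions, not market data):

  # Toy buyer decision: bid the list price and win for sure, or bid your
  # eFMV and win only if nobody outbids you. All figures are illustrative.
  list_price = 900_000
  my_efmv = 850_000
  p_win_at_efmv = 0.6      # assumed chance a bid at eFMV isn't outbid
  value_to_me = 905_000    # assumed private value of the property

  surplus_at_list = value_to_me - list_price                 # guaranteed
  surplus_at_efmv = p_win_at_efmv * (value_to_me - my_efmv)  # expected

  print("bid list price" if surplus_at_list >= surplus_at_efmv else "bid eFMV")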

The process is usually really fast, because realtors are good at estimating an FMV, most sellers realize they shouldn't expect a huge premium over that estimate, and buyers accept a premium for the certainty of an immediate sale.


I'm not assuming nobody accepts a lower offer. Just that sellers will overprice FMV by some margin (as you confirm) and eventually accept the highest bid under list (that seems confirmed too).


And I think this system makes more sense - pricing is more transparent and saves everybody a bunch of time.


It's kinda weird that it's not included yet. The schema.org spec has a field 'recipeYield' specifically for this purpose and it's present in the meatball example on the site. It should be quite easy for the author to add it.


This standard is not unique to Google. It's part of the Schema.org initiative to add semantics to the web. The full definition is here:

http://schema.org/Recipe

The biggest problem with it, imho, is the lack of a proper definition of ingredients. An ingredient is just a plain string containing the unit, amount, and name, and sometimes an extra note. Having a structured quadruple instead of a string would make this standard a lot more useful.
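
To illustrate in Python (Recipe, recipeYield, and recipeIngredient are real schema.org properties; the structured shape at the end is my suggestion, not anything schema.org defines):

  # Today an ingredient is one opaque string that every consumer has to
  # re-parse heuristically.
  recipe = {
      "@context": "http://schema.org",
      "@type": "Recipe",
      "name": "Meatballs",
      "recipeYield": "4 servings",
      "recipeIngredient": ["500 g minced beef (preferably organic)"],
  }

  # A proper quadruple would make the data directly usable, e.g. for
  # scaling a recipe to a different yield. Hypothetical shape:
  structured_ingredient = {
      "amount": 500,
      "unit": "g",
      "name": "minced beef",
      "note": "preferably organic",
  }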


I was surprised to find that bbc.co.uk/food uses the Recipe schema. Made scraping it much easier when there was some talk about canning it a couple of years back.
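
A minimal sketch of that kind of scrape, assuming the pages carry Recipe microdata via itemprop attributes (the URL is illustrative, and the exact property names on the live site may differ):

  import requests
  from bs4 import BeautifulSoup

  # Fetch one recipe page and read the schema.org microdata out of it.
  html = requests.get("https://www.bbc.co.uk/food/recipes/example").text
  soup = BeautifulSoup(html, "html.parser")

  title = soup.find(attrs={"itemprop": "name"})
  ingredients = [el.get_text(strip=True)
                 for el in soup.find_all(attrs={"itemprop": "recipeIngredient"})]

  print(title.get_text(strip=True) if title else "?")
  print(ingredients)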


    > I was surprised to find that
    > bbc.co.uk/food uses the Recipe
    > schema
Gotta say, having worked at the Beeb, this doesn't surprise me at all. Amazing what marginal-value technical itches can be scratched when commercial pressure is eased off...


There's a Belgian startup that matches flavors with a scientific method. https://www.foodpairing.com/en/home


He also did a funny TED talk on the topic. At just 14 minutes, it may be more manageable than a lengthy article.

http://waitbutwhy.com/2016/03/my-ted-talk.html


Thanks for the link, I'll watch it later.


And a great thing to put the next 14 minutes into if you're struggling for procrastination ideas. :)


IKEA, on the other hand, designed their stores specifically to keep customers inside as long as possible. If you don't know the shortcuts (or don't notice them), you have to wander through the whole showroom & marketplace. And just about everyone leaves with more than what they initially planned.


It only works once. My usual response to such practices is to not return to the store again (and probably buy online). They lose in the long run.


Maybe it does not work on you, but it works on a vast number of people.


The vendor experiment provides a nice solution to that problem. Check the vendor directory into your own repository and you always have the required source code available, even after the original author removes their repository from GitHub.


And lagging behind in terms of core OS updates. My last phones were Nexus phones and a Moto X because I knew they would get updates really quick (or quickish for the Moto X). I don't even want to try my luck with other vendors.


The one expert, because the others would not be able to reach a decision on which move to play.


In fact, no. A big group of average experts appears to be better than a single super-expert. This is the principal justification for the success of AI in oil prospecting (https://books.google.fr/books?id=6DNgIzFNSZsC&pg=SA30-PA5&lp...)


Counterpoint: https://en.wikipedia.org/wiki/Kasparov_versus_the_World

I think a key missing component to crowd success on real expert knowledge (as opposed to trivia) is captured by the concept of prediction markets. (https://en.wikipedia.org/wiki/Prediction_market) The experts who are correct will make more money than the incorrect ones and eventually drive them out of the market for some particular area.
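
As a sketch of that mechanism, here's a toy binary market using Hanson's logarithmic market scoring rule (LMSR), one standard construction; the parameters are illustrative:

  import math

  # Toy LMSR market for a yes/no question. Each share of the true outcome
  # pays out 1, so experts who move the price the right way profit and the
  # chronically wrong lose their stake.
  b = 100.0        # liquidity parameter
  q = [0.0, 0.0]   # outstanding shares for [yes, no]

  def cost(q):
      return b * math.log(sum(math.exp(x / b) for x in q))

  def price(q, i):
      return math.exp(q[i] / b) / sum(math.exp(x / b) for x in q)

  def buy(q, i, shares):
      """Charge the trader the cost difference for buying `shares` of i."""
      before = cost(q)
      q[i] += shares
      return cost(q) - before

  paid = buy(q, 0, 50)                          # an expert backs "yes"
  print(round(price(q, 0), 3), round(paid, 2))  # price rises toward "yes"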


That's no counterpoint because the World team (of which I was a member) was made up of boobs on the internet, not players of Kasparov's strength, which was the premise of the question you responded to.


The easy thing about combining AI systems is that they don't argue. They don't try to change the opinion of the other experts. They don't try to argue with the entity that combines all opinions; every AI expert gets to state its opinion once.

With humans on the other hand, there will always be some discussion. And some human experts may be better at persuading other human experts or the combining entity.

I think it would be an interesting thing to try after they beat the number 1 player. Gather the top 10 (human) Go players and let them play as a team against AlphaGo.
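
A sketch of that one-shot combination, with hypothetical engines each reporting a probability distribution over candidate moves; the combiner just averages and picks the argmax, no argument rounds:

  from collections import defaultdict

  # Each (hypothetical) engine states its opinion exactly once.
  opinions = [
      {"D4": 0.6, "Q16": 0.3, "C3": 0.1},
      {"D4": 0.2, "Q16": 0.7, "C3": 0.1},
      {"D4": 0.5, "Q16": 0.4, "C3": 0.1},
  ]

  combined = defaultdict(float)
  for dist in opinions:
      for move, p in dist.items():
          combined[move] += p / len(opinions)

  print(max(combined, key=combined.get))  # -> Q16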


This is nonsense. Combining AI systems requires a mechanism to combine their evaluations. The most effective way would be a feedback system, where each system uses evaluations from the other systems as input to possibly modify its own evaluation, with consensus as the goal. This is simply a formalization of argumentation -- which can be rational; it doesn't have to be based on personal benefit. And generalized AI systems may well some day have personal motivations, as has been discussed at length.
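
One textbook formalization of such a feedback system is DeGroot-style repeated averaging: each system keeps updating its evaluation from the others' until the group converges. A toy sketch with made-up trust weights:

  # Each system repeatedly replaces its evaluation of a move with a
  # weighted average of everyone's; with these (illustrative) weights
  # the opinions converge to a single consensus value.
  evals = [0.9, 0.4, 0.6]            # each system's score for one move
  trust = [[0.6, 0.2, 0.2],          # how much system i weighs system j
           [0.3, 0.4, 0.3],
           [0.2, 0.2, 0.6]]

  for _ in range(50):
      evals = [sum(w * e for w, e in zip(row, evals)) for row in trust]

  print(evals)  # all three entries converge to (nearly) the same value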


This reminds me of the story of the Game of the Century, with Go Seigen's shinfuseki. https://en.wikipedia.org/wiki/List_of_go_games#.22The_Game_o...

https://en.wikipedia.org/wiki/Shinfuseki


In this story, it's regulation that has put some companies at the mercy of competitors. Small Startup Y has to hope that Big Corp X is willing to license a patent for a fair dollar amount. Startup Y has no leverage to negotiate a large fee down to an acceptable level, no budget to pay for expensive licenses, and no resources to fight in court if it ignores the patent.

Considering two companies that both hold patents, you get a Nash equilibrium where both companies go to court for patent infringement (assuming the plaintiff always wins). The payoff matrix could look like this, where each cell shows the change in profit, in percentage points, as row player \ column player:

              | court      | not court
  --------------------------------------
  court       |  -5 \  -5  |  10 \ -15
  ------------|-------------------------
  not court   | -15 \  10  |   1 \   1
  --------------------------------------
Assume the profit of the previous year is 14%.

  - If neither company goes to court, both profits grow slightly, by 1 point to 15%.
  - If one company goes to court and the other doesn't, the plaintiff's profit increases by 10 points to 24% and the defendant's drops by 15 points to -1% (putting it at a loss).
  - If both companies go to court, both profits drop by 5 points to 9%.
If you want a real-life example of this situation, just look at the lawsuits between Samsung and Apple. Probably the only case where they'll evolve to the Pareto-optimal solution is when patents no longer exist at all.
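
A quick sanity check in Python that "court" dominates and (court, court) is the only Nash equilibrium, even though (not court, not court) is better for both:

  from itertools import product

  # Payoffs (row, column) from the matrix above, in percentage points.
  payoff = {
      ("court", "court"): (-5, -5),
      ("court", "not court"): (10, -15),
      ("not court", "court"): (-15, 10),
      ("not court", "not court"): (1, 1),
  }
  moves = ["court", "not court"]

  def best_row(col):
      return max(moves, key=lambda row: payoff[(row, col)][0])

  def best_col(row):
      return max(moves, key=lambda col: payoff[(row, col)][1])

  for row, col in product(moves, moves):
      if best_row(col) == row and best_col(row) == col:
          print("Nash equilibrium:", row, col)  # -> court court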

A world without patents would allow companies to win customers based on pure attractiveness and features, not by getting the competition banned. Note that this isn't the same as giving your key designs away for free. It means I could reverse-engineer an iPhone touchscreen and create a compatible version without being sued by anyone. Apple would still have the early-mover advantage with existing customers, the consumer would have more/better/cheaper choices, and I would have a more attractive product. (Note that this might not be the case for other sectors, like pharma, where it costs a lot to develop a new drug from scratch, rather than the cheaper additive innovations in the tech sector.)

