Hell yeah! I have been looking for something like this.
On the home page, when I search for something like `Southeast`, it shows 0 shops, but if I actually select it I get 223. This was confusing and made me unsure of how many relevant hits I would find.
It would be really nice if you could search in smaller geographic areas. State boundaries would be a good place to start. As you continue, having city / intra-state regions would also be huge for the cases where you want to visit these places in person.
I looked at it more last night. It would be great if you expanded into other forms of manufacturing and related services.
For example, if I am having a complex part machined, I probably need some electronics to drive that thing. Having suppliers for electronic parts, circuit board printing, and plastic molding to house those electronics would be needed to finish the job.
One of the reasons people choose webp is file size. If your images are smaller, you transfer and store fewer bytes, which can reduce cost and speed up delivery to the client.
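Just to make that concrete, here's a minimal sketch of the kind of size comparison I mean (this assumes Pillow is installed and uses a hypothetical photo.png as input; the savings you actually see depend on the image and quality setting):

```python
# Rough sketch: re-encode an image as WebP and compare byte sizes.
# Assumes Pillow is installed and a local file named photo.png exists (hypothetical).
import os
from PIL import Image

src = "photo.png"   # hypothetical input image
dst = "photo.webp"

Image.open(src).save(dst, "WEBP", quality=80)  # lossy WebP at quality 80

src_kb = os.path.getsize(src) / 1024
dst_kb = os.path.getsize(dst) / 1024
print(f"original: {src_kb:.1f} KiB")
print(f"webp:     {dst_kb:.1f} KiB ({dst_kb / src_kb:.0%} of original)")
```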
That is interesting. I have worked in the space too, and have seen similar attitudes.
I think it often comes down to who is responsible for making decisions with that data. If a product or business person is the one driving a feature and looking for adoption, the engineers likely aren't going to be invested in building out sophisticated metrics. They already get the metrics they are responsible for from their cloud provider (resource use / latency / scale).
I think that problem is compounded by the perception that these integrations are going to tank your product's perf (it may hurt the metrics engineers do care about).
I think all of those dynamics change in really big companies with thousands of engineers. Then you can often end up in a situation where engineers are now required to maximize product metrics, and need visibility into their small slice of the pie.
So, I think it's largely incentives, which is why I see all of the metrics vendors targeting product and sales people in small/mid-sized companies.
That’s an interesting point. One thing I’ve noticed, though, is that even the people who are directly exposed to incentives (usually product and marketing) tend to focus almost exclusively on the final KPIs they’re measured on, like revenue or conversion rate.
Because of that, the analytics layer is often seen as something secondary. As long as the top line numbers are moving, there’s little perceived urgency to invest in a structured analytics foundation that explains why those numbers move.
So even when incentives exist, they’re often too outcome focused. Analytics that helps understand mechanisms, not just results, struggles to justify itself until something breaks or growth stalls.
I think that is because they are being judged by their outcomes.
In the space I was in (ads), users were highly mistrustful of the data. They felt everything was kind of fuzzy (e.g., how well are you measuring unique users and their actions?).
They would end up using multiple vendors (and we would have to spend a lot of time comparing and contrasting results). They really, really wanted "apples to apples" comparisons.
At the end of the day they were trying to answer: does what I am spending my money on give me the results the business needs? To your point, there is a lot of nuanced data, but their bosses definitely only cared about the top line: did it move the needle?
I feel like that is part of their cloud strategy. If your company wants to pump a huge amount of data through one of these, you will pay a premium in network costs. Their salespeople will use that as a lever for why you should migrate some or all of your fleet to their cloud.
A few gigabytes of text is practically free to transfer even over the most exorbitant egress fee networks, but would cost “get finance approval” amounts of money to process even through a cheaper model.
It sounds like you already know what salespeople's incentives are. They don't care about the tiny players who wanna use tiny slices. I was referring to people who are trying to push PB through these. GCP's policies make a lot of sense if they are trying to get major players to switch their compute/data host to reduce overall costs.
You're off by orders of magnitude. A million tokens is about 5 MB of text and costs $0.20 to process in something like Gemini 3 Flash.
Hence, a terabyte of text would cost about $42,000 to run through a Pareto-frontier "cheap" model.
The most expensive cloud egress fee I could quickly find is $185 per terabyte (Azure South America Internet egress).
Hence, AI processing is 200x as expensive as bandwidth, or put another way, even a long-distance international egress at "exorbitant cloud retail pricing" is a mere 0.5% of the cost.
Petabytes, exabytes, etc. just add digits to both the token cost and the bandwidth cost in sync and won't significantly shift the ratio. If anything, bandwidth costs will go down and AI costs will go up because of: output tokens, smarter models, retries, multiple questions for the same data, etc.
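If anyone wants to sanity-check the back-of-envelope math, here's a quick sketch using the numbers above (a flat 5 MB per million tokens lands around $40k rather than $42k, but it's the same ballpark):

```python
# Back-of-envelope: cost to push 1 TB of text through a cheap model vs. cloud egress.
# Numbers from the thread: ~5 MB of text per million tokens, $0.20 per million tokens,
# $185/TB for the priciest egress I could find (Azure South America Internet egress).
TB = 1e12                 # bytes in a terabyte (decimal)
bytes_per_mtok = 5e6      # ~5 MB of text per million tokens
price_per_mtok = 0.20     # USD per million input tokens
egress_per_tb = 185.0     # USD per TB

ai_cost = (TB / bytes_per_mtok) * price_per_mtok
print(f"AI processing: ~${ai_cost:,.0f} per TB")          # ~$40,000
print(f"egress:        ~${egress_per_tb:,.0f} per TB")
print(f"ratio:         ~{ai_cost / egress_per_tb:.0f}x")  # ~216x, i.e. a couple hundred times
```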
3. Take the good benefits, and hopefully good work-life balance, and maximize for time with your family. There is always going to be the next thing. Find ways to satiate your curiosity (like attending conferences) while savoring time with your kid, because you aren't busy/burnt out from grinding at a startup :)
Not who you asked, but I have a similar setup. I can run everything I need for local development in that image (db, message queue emulator, cache, other services). So things like setting environment variables or running postgres work the same way they do outside the container.
The image itself isn't the same image that the app gets deployed in, but is a portable dev environment with everything needed to build and run my apps baked in.
This comes with some nice side effects, like being able to instantly spin up clean work environments on my laptop, someone else's, or a remote VM.
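Not my actual image, but a minimal sketch of the shape of that kind of portable dev image (base image, packages, and env vars here are all placeholders):

```dockerfile
# Sketch of a self-contained dev image (not the deploy image): build tooling plus
# the local services the apps need are baked in. All names here are placeholders.
FROM ubuntu:24.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git curl \
        postgresql redis-server \
    && rm -rf /var/lib/apt/lists/*

# Same env vars the apps expect when run outside the container.
ENV DATABASE_URL=postgres://dev:dev@localhost:5432/app \
    REDIS_URL=redis://localhost:6379

WORKDIR /workspace
CMD ["bash"]
```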