sklarsa's comments | Hacker News

In my experience as someone who started learning how to golf in their 30s, you need to be playing at least 4x a week to get good enough to start enjoying it in the first place. Unless you like shanking balls 5 yards, looking for lost balls in the woods, or picking your ball up near the green because the rest of your group has already finished the hole. Which to me, is no fun at all.


Thing is though, the more you play, the more expectations you have (or don't have depending on your seriousness).

I used to play frequently, and would be constantly unhappy with my round because I put effort into the game. Due to costs increasing, job being more demanding, and just having other things to do, I've golfed very little this year.

I've played 2 full rounds this year, spent very little time on the range (much more on the putting green, as my residential building has a small turf green that I can just noodle around on at any time) and expected zero from each round.

Ironically, those two rounds have been by far the best rounds I've ever played in my life. For one of those rounds, I actually took a small-ish but still decently sized dose of magic mushrooms. Two of my playing group were serious golfers and completely sober, and they were blown away by how relaxed I was when I was tripping. I was calm, relaxed, and enjoying my golf but still completely locked in and focused, and still tripping. I was like +6 through the front 9 from the back tees, which in my book is fucking amazing as I generally shoot low 90s.


Tripping guy: "I was +6"

Sober partners: "Dude, you were +21"


The real goal for a bunch of golfers is to get to the point where they're not the slowest guy on the team and stop there. At that point you can network on the course without annoying the people you're trying to schmooze, but also not show them up. It's about who you are playing with, not the game itself.


I also picked it up in my 30s - twice a week for a year with lessons along the way is enough to get you to sub 100, and it’s perfectly fun to play at that level. Going lower than that does take more effort though.


I've been using Incus containers (not VMs) for running tests against a "real" OS and it's been an absolute game changer for me. It's granted me the ability to simultaneously spin up and tear down a plethora of fresh OSes on my local dev machine, which I then use as testing targets for components of my codebase that require Docker or systemd. With traditional containers, it's tricky to mimic those capabilities as they would exist on a normal VM.

Because both my project and Incus are written in Go, orchestrating Incus resources in my test code has been pretty seamless. And with "ephemeral" containers, if things start to get out of hand, I just need to stop the container to clean it up. Much easier than the usual two-step stop-then-delete process.
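A rough sketch of that flow, shown here in TypeScript driving the incus CLI rather than the Go client the comment describes (assumes incus is installed and the images:debian/12 alias is reachable; the container and image names are just placeholders):

    import { execFileSync } from "node:child_process";

    function withEphemeralContainer(name: string, image: string, fn: () => void): void {
      // --ephemeral: the instance is deleted automatically as soon as it stops.
      execFileSync("incus", ["launch", image, name, "--ephemeral"]);
      try {
        fn();
      } finally {
        // Stopping the container is the entire cleanup step.
        execFileSync("incus", ["stop", name]);
      }
    }

    withEphemeralContainer("systemd-test-target", "images:debian/12", () => {
      // Run the test workload inside the fresh OS, e.g. verify systemd finished booting.
      execFileSync("incus", [
        "exec", "systemd-test-target", "--",
        "systemctl", "is-system-running", "--wait",
      ]);
    });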

Looking forward to seeing what's to come in IncusOS!


I'm very surprised to see all of the negativity toward Cloudflare's usability and value here.

It's been relatively painless for me to set up tunnels secured by SSO to expose dashboards and other internal tools across my distributed team using the free plan. Yes, I need to get a little creative with my DNS records (to avoid nested subdomain restrictions), but this is not really much of a nuisance given all of the value they're giving me for free.

And after paying just a little bit ($10-20 per month), I'm getting geo-based routing through their load balancers to ensure that customers are getting the fastest connection to my infra. All with built-in failover in case a region goes down.


> I'm very surprised to see all of the negativity toward Cloudflare's usability and value here.

As someone who uses Cloudflare at a professional level, I don't. To me, every single service Cloudflare provides feels somewhere between not ready for production and lacking any semblance of a product manager. Everything feels unreliable and brittle. Even the portal. I understand they are rushing to release a bunch of offerings, but that rush does show in what they ship.

One of my pet peeves is Cloudflare's Cache API in Cloudflare Workers, and how Cloudflare's sanctioned approach to caching POST requests is to play tricks with the request, such as manipulating the HTTP verb, URL, and headers, until it somehow works. It's ass-backwards. They own the caching infrastructure, they own the JS runtime, they designed and are responsible for the DX, but all they choose to offer is a kludge.

Also, Cloudflare Workers are somehow positioned as customizable request pipelines, yet other Cloudflare products such as the Cloudflare Images service can't be used with Workers because it fails to support forwarding standard request headers.

I could go on and on, but ranting won't improve anything.


POST requests aren't really meant for repeatable stuff though. Even browsers will ask for confirmation before letting you reload the result of a POST request. I think you are holding it wrong.

Now, I get it, things happen and you gotta do what you gotta do, but then you aren't on the happy path anymore and you can't have the same expectations.


> POST requests aren't really meant for repeatable stuff though.

That's simply wrong. Things like GraphQL beg to differ. Anyone can scream this until they are red in the face, but the need to cache responses from non-GET requests is pervasive. I mean, if it wasn't, then why do you think Cloudflare recommends hacks to get around it?

https://developers.cloudflare.com/workers/examples/cache-pos...

Your line of argument might have had a theoretical leg to stand on if Cloudflare hadn't gone out of its way to put together official examples of how to cache POST requests.


The Cache API is a web-standard API. We chose to follow it in an attempt to follow standards. Unfortunately it turned out to be a poor fit. Among other things, as you note, the "cache key" is required to be HTTP-request-shaped, but must be a GET request, so to cache the result of a POST request you have to create a fake GET request that encodes the unique cache key in the URL. The keys should have just been strings computed by the app all along, but that's not what the standard says.

We'll likely replace it at some point with a non-standard API that works better. People will then accuse us of trying to create lock-in. ¯\_(ツ)_/¯
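For anyone who hasn't run into it, the workaround being described looks roughly like this in a Worker (a sketch, not Cloudflare's exact sample code; it assumes the Workers runtime's caches.default binding and Web Crypto, and the /__post-cache/ path is just an arbitrary namespace):

    export default {
      async fetch(request: Request): Promise<Response> {
        if (request.method !== "POST") return fetch(request);

        // Derive a string key from the body (a real app would normalize it first).
        const body = await request.clone().text();
        const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(body));
        const key = [...new Uint8Array(digest)]
          .map((b) => b.toString(16).padStart(2, "0"))
          .join("");

        // The "fake GET request" that smuggles the key into a URL, since the
        // Cache API only accepts GET-shaped cache keys.
        const cacheKey = new Request(`${new URL(request.url).origin}/__post-cache/${key}`);

        const cache = caches.default;
        let response = await cache.match(cacheKey);
        if (!response) {
          response = await fetch(request); // forward the POST to the origin
          await cache.put(cacheKey, response.clone());
        }
        return response;
      },
    };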


> The Cache API is a web-standard API. We chose to follow it in an attempt to follow standards.

That's perfectly fine, but it doesn't justify the lack of support for non-GET requests. The Cache API defines the interface, but you dictate how you choose to implement it. In fact, Cloudflare's Cache API docs feature some remarks on how Cloudflare chose to implement certain details a particular way and chose not to implement some parts of the Cache API at all.

https://developers.cloudflare.com/workers/runtime-apis/cache...

Also, the Cache API specification doesn't exclude support for non-GET requests.

https://w3c.github.io/ServiceWorker/#cache-put

If Cloudflare's Cache API implementation suddenly supported POST requests, the only observable behavior change would be that cache.put() would no longer throw an error for requests other than GET. This is hardly an unacceptable change.


We can't implement automatic caching of POST requests because there is no standard for computing cache keys for POST requests; it's different for every application.

E.g. presumably the body of the request matters for cache matching, but the body can be any arbitrary format the application chooses. The platform has no idea how to normalize it to compute a consistent cache key -- except perhaps to match the whole body byte-for-byte, but for many apps that would not produce the desired behavior. For example, if you had a trace ID in your requests, now none of your requests would hit cache because each one has a unique trace ID, but of course a trace ID is not intended to be considered for caching.

The Cache API can only implement the semantics that the HTTP standard specifies for caching, and the HTTP standard does not specify any semantics for caching POST requests.

That said, what we really should have done was left it up to the application to compute cache keys however they want, and only implemented the lookup from string cache key -> Response object. That's not what the standard says, though.
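To make the trace-ID point concrete: under that model the application would compute its own string key, deciding which parts of the body matter, and feed it into the synthetic-GET workaround above. A sketch, where traceId is a hypothetical field name:

    // The application, not the platform, decides what counts toward the key.
    async function cacheKeyFor(body: Record<string, unknown>): Promise<string> {
      const relevant = { ...body };
      delete relevant.traceId; // hypothetical per-request field that shouldn't affect caching

      // Canonicalize so key order doesn't change the hash.
      const canonical = JSON.stringify(Object.fromEntries(Object.entries(relevant).sort()));
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(canonical));
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }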


Cloudflare’s whole admin console is rather incoherent. The tunnel configuration makes very little sense unless you understand the product in quite a bit of detail, and the docs don’t help. And sites, tunnels, DNS config and such are entangled in bizarre ways. Oh, and there’s console zero and one, and they seem to reroute to each other basically arbitrarily :(

It would be awesome if there was a way to view the console that actually reflected how a request routed through the system.


I really wanted to love Cloudflare, even invested in it a couple of years ago because I was so confident in their vision. But...

- They won't tell you at what point you will outgrow their $200/mo plan and have to buy their $5K+/mo plan. I've asked their support and they say "it almost never happens", but they won't say "It will never happen." HN comment threads are full of people saying they were unexpectedly called by sales saying they needed to go Enterprise.

- There are no logs available (or at least there weren't 6-9 months ago) for the service I proxy through Cloudflare at the $200/mo level; you have to go with Enterprise ($5K+, I've been told) to get logs of connections.

- I set up some test certs when I was migrating, and AFAICT there is no way to remove them now. It's been a year, my "Edge Certificates" page has 2 active certs and 6 "Timed Out Validation" certs, I can't find a way to remove them.

- The tunnel issue I had on Friday while trying to set up my tunnel; more details in another comment here, but apparently the endpoint they gave me was IPv6-only and not accepting traffic.

- Inability to set up a tunnel, even to test, on a subdomain. You have to dedicate a domain to it, for no good reason that I can tell.


Usually I write some IaC to automate this tedium so I only have to go through the IAM setup pain once. Now if requirements change, that's an entirely different story...


So the problem when you combine IaC with CI/CD is that the role assumed by the CI agent needs privileges to deploy things, so you need a bootstrap config to set up what it needs. If you have a mandate to go least-privilege, then that needs to include only the permissions strictly needed by the current deployable. So, no "s3:*", you need each one listed.

So far so good, you can do this with a bootstrap script that you only need to run at project setup.

If you also have a mandate (effectively) to go fully serverless, then as your project evolves and you add functionality, you find that most interesting changes use something new in the platform. So you're not getting away with running the bootstrap script once; you're updating it and running it for almost every change. And you can't tell in advance what permissions you're going to need, because (especially if you're on Terraform) there's apparently no documentation connecting the resources you want to manage with the permissions needed to manage them.

So you try to deploy your change, IAM pops an error or two, you try to figure out what permissions to add to the bootstrap script, you run it (fixing it when it breaks at this point), you try deploying again, IAM pops another couple of errors, and now you're in a grind cycle whose length you can't predict - and you need to get to the end of it before you can even test your feature, because fully serverless means you can't run your application locally (and getting management to pay for the LocalStack Pro licence is a dead end).

At some point it won't be clear why IAM is complaining at all, because the error you get makes no sense whatsoever, so it's off to support to find out a day later that ah, yes, you can't use an assumed role just there, it's got to be an actual role, and no, that's not written down anywhere, you've just got to know it, so you need to redesign how you're using the roles completely. And right about this point is when I usually want to buy a farm, raise goats, and get way too into oil painting, instead of whatever this insane waste of life is.


Personally, I find the OSS Grafana and Loki experience to be a bit maddening; both as an end-user and admin.

From the admin side, I usually end up deploying Loki with Helm charts in k8s clusters. This makes it difficult to tweak specific settings, especially around Loki's varying deployment models and "power-user" options. Helm also makes upgrades a bit scary without rigorous testing in staging environments that have version parity with prod across all infra tools. The same goes for Grafana, which I usually deploy alongside Prometheus, tightly-coupling that entire part of my observability stack.

As for end-user experience, Loki's storage and query model seems to be designed for log aggregation across multiple microservices with structured logging and proper tagging. But sometimes, I just want to read through application logs, and batch-querying through Grafana's Explore interface doesn't make it any easier.

What are people using for log aggregation these days? The author mentions a central syslog server, and that doesn't sound like the worst idea to me...


> but a single word used in a vague enough way is enough to skew the results in a bad direction

I'm glad I'm not the only one who feels this way. It seems like these models latch on to a particular keyword somewhere in my prompt chain and throw traditional logic out the window as they try to push me down more niche paths that don't even really solve the original problem. Which just leads to higher levels of frustration and unhappiness for the human involved.

> Anecdotally, I've felt my skills quickly regressing because of AI tooling

To combat this, I've been trying to use AI for the problems I would normally solve with Stack Overflow results: small, bite-sized, clearly defined tasks. Instead of searching "how to do X?", I now ask the model the same question and use its answer as a guide to solving the problem rather than as a canonical answer.


Set up your dev environment and get the project running locally. This can take anywhere from 2-3 hours to a week. But if it does land toward the longer end of that range, you will have learned tons about the application's architecture and past design decisions, and will probably have made at least one git commit to a README somewhere.


I love the ending of this story, which isn't obvious from just looking at the title. The author identified key pain points around customer support, automated them, and went back to enjoying life. This is the kind of thing that gets me excited about the possibilities of technology and AI as a force multiplier, especially when working on side projects, "lifestyle" businesses, or even startups as a single founder.


No one wants to talk to an AI for customer support.


I've gone back and forth on this over the last few months.

I started out thinking that we've all been conditioned by bad customer support chatbots whose only purpose is to look up facts from the FAQ and then tell you to call the real customer support line to actually handle your problem. The problem was that the chatbots weren't granted the ability and authority to actually do things. Wouldn't it be great if you could ask a bot to cancel your account or change your billing info and it would actually do it?

But then I realized... anything with a clearly defined process or workflow like that would be even better if it were just a form on an account settings page. Why bother with a chatbot?

Customer support lines run by humans exist for two reasons:

- increase friction for things you don't want your user to do (like cancel their account without first hearing a bunch of sales pitches)

- handle unanticipated problems that don't fit into the happy path you've set up on the settings page

My worry is that business dudes will get excited about making chatbots that can do the former, and they'll never trust an AI to handle the latter. So I'm now of the opinion that AI customer support will only be used to make things worse.


Customer support isn't paid well, so support staff often aren't motivated to become very skilled beyond the level of a chatbot before they move on to other things. So the interface to bad docs doesn't matter much. And good docs are very hard to produce. AI magnifies problems when good docs are lacking.


> aren't motivated to become very skilled beyond the level of a chatbot

Everyone has some amount of common sense. The current state of the art does not, so it cannot make decisions. This is why these things can't currently replace real support beyond being a search function exceedingly capable of interpreting natural language queries and, optionally, rephrasing what the found document says to fit the query better.

You can't even have these systems as first-line support, verifying whether the person has searched the docs, because you can't trust them with a decision about whether the docs' solutions are exhausted and human escalation is needed. There currently simply needs to be a way to reach a human. I'm as happy as the next person to find that a no-queue computer system could solve my problem, so I use it when my inquiry is a question and not a request, but a search function is all they are.


Chatbots are loaded with issues. But I have also had a lot of issues with humans.

By the time I have an issue, I have usually covered basic ideas and FAQs already. Currently, I tend to use Perplexity supported by ChatGPT before engaging online tech support, and I create a document for them before beginning.


> Customer support isn't paid well

"We choose not to pay customer support well".

I've worked at companies where customer support was strongly supported, paid well, and given the tools to do their jobs well.

They were incredible.


Microsoft used to have some managers do support calls. Absolutely amazing service.


There's a third case: dealing with folks who just aren't technically savvy enough to figure some things out on their own, no matter how intuitive, well documented, or fully featured your product is.

I think I'd rather troubleshoot with a well-scripted AI chatbot than a human being who's forced into the role of an automaton - executing directly from a script. Just, FFS, let me escalate to an actual, competently trained human being once I've been through the troubleshooting.


Actually, I do.

There's no wait in line. There's no waiting 2 min for each response in chat, or waiting 5 min on hold while the rep figures out what to do. And I've, shockingly, gotten issues resolved faster and better.

Using one semi-popular consumer app -- once it pointed me to docs on their site that Google wasn't finding because I didn't know what keywords to use. And twice it escalated me to send a message to the relevant team, where I got a response that addressed my problem -- and where escalation would have been necessary with a human call-center rep anyways.

The point is that it was far, far faster than any chat rep OR phone rep. And it's far faster to escalate too.

I'm sure this experience isn't universal, but I've been truly shocked at how it's turned what are otherwise 15-20 minute interactions into 3 minute interactions. At the same level of quality or better.


Then you get situations like this one, where the AI invented a non-existent policy (which the airline did not want to honor):

https://arstechnica.com/tech-policy/2024/02/air-canada-must-...


There's a non-zero chance that real humans working as customer service agents will invent facts, too (whether to try and be helpful about something they're not completely sure about, or just to get a problematic customer to leave them alone).


Humans get things wrong, but are less likely to generate novel facts.


I've recently encountered one that just sends you in a loop, and there is literally no way to actually speak to a real person. Unless you want to give them more money; they're very responsive in that case.

This is a billion-dollar company you have definitely heard of.


Why don't you just name the company?


I'm guessing Amazon?


Definitely Amazon


Legal reasons.

(It's not Amazon.)


I've had exactly one AI chatbot point me to the right documents. All the other interactions were exercises in frustration, and I've canceled more than one product due to shitty AI support. When I have a question, if an automated system could handle it, I wouldn't have a question.


I would if the AI chatbot could ever actually answer my question, but the conversation ALWAYS goes like this:

Bot: Welcome! Please tell me what you'd like help with.

Me: how do I X?

Bot: Please choose an option: [list of irrelevant things].

Me: None, I need to do X.

Bot: Okay, please try [instructions copy-pasted from the docs].

Me: I already read the documentation and it didn't answer my question, that's why I'm here.

Bot: I'm sorry, I don't know how to help.

Me: Can I speak to a human?

Bot: Welcome! Please tell me what you'd like help with.


There's also no useful output whatsoever if you actually tried any troubleshooting yourself.

Never has a chatbot been of help to me.


Doesn't need to be AI, most customer support was already automated before ChatGPT rose to prominence. Hell, I developed a mobile website once for a power company that was basically a wizard / checklist of "Have you checked for known outages? Have you checked your breakers? Have you checked if your neighbours have issues too?" before they were shown the customer service number.

Human contact doesn't scale, or is prohibitively expensive. I sat with customer support a while ago (again energy sector, but different company) to observe, and every phone call was at least ten minutes, often 20-30, plus some aftercare in the form of logging the call or sending out a follow-up email.

They also did chat at the time, where a chatbot (which wasn't ChatGPT / AI based yet, but they were working on it) would do the initial contact, give low-hanging-fruit suggestions based on the customer's input, and ask for information like their address before connecting to a real human. The operator was only allowed to handle two chats at a time, and each chat session took about half an hour - with some ending because the person on the other side idled too long. I mean granted, the UI wasn't great and the customer service guy wasn't a fast typer, but even then it was time-consuming and did not scale. They had two dozen people clocked in; if they were all as fast as this one person, they could handle 50 support calls an hour at most.

It does not scale. This was for a company with about 2.5 million users who rarely need customer support. Compare with companies like Google or Facebook that have billion(s) of users. They automated and obfuscated their customer support ages ago.


24 people on 2.5 million users and you say it doesn't scale?


    2.5 million users : 24 support staff
    1 billion users   : 9600 support staff
If it scales linearly, that's about 10k support staff per billion users. I was going to say that a 10,000-person department for handling customer support sounds like it doesn't scale, but maybe I'm wrong, given that that is only about 5% of Google's headcount.


Also in terms of costs: if those support staff cost 100 grand a year in salary and other costs, staffing the 2.5M-user company with those 24 support crew 24/7 (3 shifts, let's pretend it's equally busy at 3AM) results in some 25 cents per month per user that needs to be priced into the product. The transaction fees on a monthly billing system are likely higher than the cost of a skilled support team, if this is a representative scale for the industry.

I frankly doubt the numbers; surely it costs more than this for an average company?


People want their support issues solved as quickly as possible. They don't want to talk to AI support bots because such a bot is just an inefficient, error-prone wrapper over the documentation, and if you have an actual support need (as opposed to "I just haven't read any of the documentation"), that kind of AI support isn't going to be helpful.

If you have an AI customer support that can actually support customer service requests and provide resolution, people will use it and be happy about it, or at least indifferent.


This will depend on your product. I have a side project where I get a few support calls per day. 95% of the calls can be handled by just quoting documentation/FAQ verbatim. The customers are typically not very sophisticated computer users.


People who can understand what the AI is saying either don't need the AI or have problems the AI is too dumb and powerless to solve.

People who can't read the documentation aren't going to understand the AI's bad, or even good, summary of the documentation.


Broadly, I agree. And I am furious with Progressive Insurance for requiring a smartphone/mobile app to file roadside assistance claims, and at my inability to get someone real on a call.

But,

In this particular story, the people were asking questions that were answered in the instructions.

No one wants to waste their time answering stupid questions, particularly if they are a solo small shop who gets entitled people asking questions around the clock.


If it’s well-implemented, it’s fantastic.

This isn’t really customer support, but Prisma (the popular TypeScript ORM) has an AI that can answer just about any Prisma-related question. It’s got a great RAG setup, can help think through novel scenarios, and always refers to specific docs pages and GitHub issues.

I think it’s made by a company called kapa. Those guys are gonna go far. That thing works SO well. I’ve been imagining how good life would be with a prisma-style AI docs assistant for things like massive, obtuse google APIs.


I want to talk to an AI for customer support as the first line so long as there is always a "Talk to a human" escape hatch.

And for a product that costs less than about $50 a month, I understand why they need to spend less than half an hour per month on me to retain me profitably. It'd be net negative otherwise (unless they offshore, in which case the math is only slightly better).


And, yet, millions already do. The point of AI for customer support is to handle the very simple requests (maybe half). The rest, you can escalate. If AI doesn't know what to do, "Hmm, I'm not sure. Let me escalate your question/request to my manager." For most normies, this will work well.


No one wants to perform customer support either. Generally, people who are smart and capable of offering good support will stop doing it because there are more fruitful and enjoyable things for them to do.


I usually agree, but Lemonade (insurance) has an amazing support bot.


Uh, I beg to differ. I felt like an autocomplete with a knowledge base and "direct links to the right email forms" would have been faster than the fake chat interface that the "bot" uses.

(Also, if you own a home in NY and use Lemonade -- do know that they don't cover cast iron piping (extremely popular in NYC). I found that out at renewal...)


I am implementing a support system for my side project, which combines a knowledge base (FAQ) with a chatbot. You can access all the answers by browsing the FAQs. If you want to contact me, you first talk to the chatbot, which has been prompted to only answer based on what the FAQ says. If it cannot answer based on that, it will make sure all the details and the problem description are there, and then forward the ticket to me. In other words, the chatbot is the first line of support.


Cast iron water supply piping, cast iron gas supply piping, or cast iron drain/waste/vent?


The AI bot asked about "cast iron plumbing," IIRC. In NYC, cast iron DWV is prominent; can't speak to the others.


Eh... I think there's a balance to be struck. You could leverage AI to handle the initial messages (90% of which are tire kickers or scammers) and funnel worthy exchanges to continue the conversation manually.


Once people notice AI is responding, they will skip it and request to talk to a human. AI will look the same as FAQs or chatbots: people don't want to interact with them; they want a human being who is able to understand their problem exactly as it is.


The right pattern is to put them directly in a queue to talk to a person, but have a system (AI or otherwise) in the queue to gather the minimal information. Like having the person explain the problem (and have something transcribe it), then having the system transfer them to the appropriate team after parsing their problem.

Or for really common cases (e.g. turn it on and off, you're affected by an outage, etc.), redirect them to a prerecorded message and then let them know that they are still in the queue and can wait for a person. 9 times out of 10 it'll solve everything, while also reducing the friction of simple things that can be answered that way.


Most chatbots are both useless and tedious to interact with. But I've also had plenty of interactions with human first-level support that's just following a script without any actual understanding. An AI would be able to provide a genuine improvement over that.

AI isn't an improvement for companies that already provide great customer support, but it has the ability to seriously raise the bar for companies that want to keep customer support costs low or that have a lot of trivial requests that they have to deal with cost-effectively


That is exactly what is happening at my employer, and it’s been really effective for trivial support, especially when it’s empowered to make meaningful changes on the customer’s behalf. It’s got large swaths of the whole UX in chat, with an authenticated session. You could see it being a better experience than clicking around anyhow. It does a great job at search too. Lots of room to improve, but it’s hitting its targets for reducing human support time and as a sales tool.


Maybe not.

But there are many ways in which AI can improve or help support. So even if "AI chat support" turns out to not work, AI can still be very helpful in automating support.

Like detecting duplicates, preparing standard answers, grouping similar requests, assigning messages to priorities and/or people and so on.


That's not what "AI" means now. "AI" now means LLM babble


Even LLMs can do much of what I mention: categorizing, grouping, assigning priorities, etc. Maybe not as well as a dedicated AI trained for this purpose only (I guess many of these could even be "simple" Bayesian filters), but good enough and readily available.


No one needs to know it's one ;)


Would you like to work with a business that treats your time and problems this way?


The only thing I care about is whether my problems get solved with minimal effort and time invested on my part. Whether it's an AI or a human doing the solving, I don't care.


Bots are designed to waste your time and save the company time:

- either it has no authority and is simply restating publicly available information

- it has authority, but only after pressuring it, in which case I have to spend time finding the right combination of things to get what I want

If I’m calling it’s because I have a special need that is unlikely to be resolved by a flow chart.


Much better than an unengaged, unempowered exploited human.


Sure, write an FAQ and usability-test your software. But I'm not convinced that you can automate/AI away your support burden in any meaningful way that isn't going to piss off your customers.


Yes, it's great writing. But it's not really about automating, I feel (please chime in, author OP?). To me, he wanted to get away from customer email ghosting and disputes. He chose to change his customer support approach and create customer service tools to manage the common requests programmatically. I feel from the writing that his original vision of continuing to extend the product and scale it has now changed to maintaining it as is. He realizes that customer requests, and the time and disappointment that come with them, grow linearly with revenue, and he does not want to do that any more.


Zola, GitHub and Netlify. Easy and free


I always watch with subtitles because the dubs seem to lose the emotion of the scene. The downside is that I can’t multitask and have to pay full attention to what’s happening on the screen. Why do you prefer dubs?

