fuziontech's comments | Hacker News

I'm glad this exists but at ~1k EUR I would be interested if it could scan 120 medium format negatives...but the fact that it does not is an absolute deal breaker for me. It seems like they are considering it. I hope they do figure that out sooner than later.


Medium format is the problem. There’s nothing for it.

Epson stopped making their flatbeds that do film, reportedly because they can’t get the CCDs anymore. That may be a rumor.

The result is they go for 2x MSRP on eBay for models that are many years old. Because that’s all that exists.

Without that, you can buy the kind of scanner meant for a photo lab ($$$$$), DIY it with a DSLR ($$$ if you don’t have one), or pay a lab a lot per roll and hope they do a good job.

I’m not saying it’s a giant market but it certainly seems to me like there’s enough of one that it could support a small product.

You can get brand new Plustek OpticFilm scanners for 35mm and smaller starting around $300, and there are plenty of other options above that. Plus the DIY.

I’m sure 35mm is easier to make and certainly a bigger market but it’s also a lot more crowded.

I expect their specs are far better than the $300 one I’ve mentioned, I don’t know enough to know. But medium format people are desperate for anything.


> Epson stopped making their flatbeds that do film, reportedly because they can’t get the CCDs anymore. That may be a rumor.

Wow, you weren’t kidding, I completely missed this. I bought one, sold it, then bought and currently own another. I better baby it, there’s really nothing like it out there.


I tried some scanning on a Plustek 8300, which is supposed to be the fastest. The process is still extremely manual/slow, and I don't think it's practical at scale. Many families who owned cameras from the 60s through the 90s will have potentially thousands of negatives to scan, but I don't see a solution that automates that digitization process.

Software could also use some improvement. Automating batch correction and clean up should be easier, IMO.


This really isn’t my area but it sounds like nothing is fast. DSLR may be fastest without just flat out hiring someone else to do it. But even with thousands of shots that would still take quite a lot of time.

And yeah, workflow is the thing that seems the worst. That seems like a great place to try to improve things to get a sale.


Not that they're cheap, but you can get Imacon scanners for much less than they retailed for. I inherited a Flextight Precision II and it still does a great job.


Do you use it with an old computer, or do you have a good way to interface a SCSI scanner with a modern machine? I tried to get my Precision II up and running but the SCSI card driver would crash Windows at random intervals.


https://web.archive.org/web/20250118205639/http://pathar.tl/...

This is way out of date. I have since been able to get it working on a Windows 11 4th-gen Intel machine with 64-bit drivers cobbled together from a couple of versions of FlexColor and some .inf modification. It's not flawless; major corruption can occur with certain operations, but overall it works for my needs.


> I expect their specs are far better than the $300 one I’ve mentioned

It's not, it's actually quite a bit worse, especially with color reproduction.


>Epson stopped making their flatbeds that do film

I don’t understand this… the V850 Pro is still being publicly sold with film and slide trays… what am I missing?

And even eBay is selling them at only about $1,200-1,700 CAD, which is considerably cheaper than the 2,000+ CAD MSRP.


All the cheaper models that used to exist are gone. So the entry price went from like $300 MSRP to something much higher.


Are you talking about other models like the v600 and v700? Because I’m not. I’ve never really seen those models come with film scanning tools; it was usually only the v850. So I have always expected to have to stump up for a v850 if I wanted to do film scanning.


I don't shoot 120, only 35mm. But I thought you could get away with a high end flatbed scanner for 120 negatives?


From what I’ve seen people mostly used the Epson scanners like the V600, V700, V850, etc.

They stopped making them early this year. Only the top-end model, at $1,500, still exists, and I don’t know whether that’s because they still make it or just that there is still stock left at Amazon/etc.


but is it LTS?


It's a "JS framework" - it's sorta forbidden to be LTS in JS world


Sounds like something that might show up on a DeskHog Pro


This was literally something someone did on the side between other stuff. We don't use Jira.


I stand corrected. Sorry for the cynical take.


It is :3 It's exactly what you'd imagine working here to be like.

And we are hiring. Come join us! https://posthog.com/careers


Okay I am definitely applying, but I need to build one of these first!


Please help me with ClickHouse.


lol, cheeky reply aside: I lead the ClickHouse team here at PostHog. We run two of the largest self-hosted ClickHouse clusters out there (and growing rapidly), each with petabytes of data. We are working on really interesting solutions to problems that we would love to open source for the broader CH ecosystem:

- Declarative, cluster aware migrations

- Iceberg- and S3-backed MergeTrees on ClickHouse (as performant as local MergeTrees)

- ClickHouse on K8s, but on Metal instances w/ NVMe used for caching and compaction.

- Load shedding query scheduler and eventually query optimizer

We have a broad charter when it comes to ClickHouse. It's one of the things that enables us to build so many products that share the same underlying data and context. Our goal is to enable all developers at PostHog to continue building these products quickly as the team and user-base scales.

It's honestly an exciting time to be working at this company and on the beating heart of what drives it. Chat with me if you have any questions!


You got this, best of luck! Engineers with ClickHouse knowledge are def in short supply right now.

To job seekers, this is your sign. Learn about how ClickHouse works. If you know SQL, you're halfway there. The other half is learning ClickHouse's data storage model, which is what gives it its efficiency.
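To get a feel for that storage model, here's a minimal sketch (table and column names are made up for illustration, not any particular production schema):

```sql
-- Data in a MergeTree table is stored as sorted, compressed column files
-- ("parts"), ordered by the ORDER BY key. Queries filtering on the leading
-- key columns can skip whole granules of rows, which is where most of
-- ClickHouse's efficiency comes from.
CREATE TABLE events
(
    team_id    UInt64,
    timestamp  DateTime,
    event      String,
    properties String
)
ENGINE = MergeTree
ORDER BY (team_id, timestamp);

-- Fast: the sorting key lets ClickHouse prune most parts and granules.
SELECT count()
FROM events
WHERE team_id = 42 AND timestamp >= now() - INTERVAL 1 DAY;
```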


find god


I pray to Alexey every day.


Using ClickHouse is one of the best decisions we've made here at PostHog. It has allowed us to scale performance all while allowing us to build more products on the same set of data.

Since we've been using ClickHouse long before this JSON functionality was available (or even before the earlier version of this, `Object('json')`, was available), we set up a job that would materialize JSON fields out of a JSON blob into materialized columns, based on query patterns against the keys in the blob. Then, once those materialized columns were created, we would route queries to them at runtime where available. This saved us a _ton_ of CPU and IO. Even though ClickHouse has some really fast SIMD JSON functions, the best way to make a computer go faster is to make it do less, and this new JSON type does exactly that. And it's so turnkey!

https://posthog.com/handbook/engineering/databases/materiali...
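A minimal sketch of that materialization trick, assuming a hypothetical `events` table with a JSON blob in `properties` (names are illustrative, not our actual schema):

```sql
-- Add a column materialized from a frequently queried JSON key. ClickHouse
-- computes it on insert and on merges, so reads of this key no longer have
-- to parse the JSON blob at all.
ALTER TABLE events
    ADD COLUMN mat_plan String
    MATERIALIZED JSONExtractString(properties, 'plan');

-- A query router can then rewrite this:
--   SELECT count() FROM events
--   WHERE JSONExtractString(properties, 'plan') = 'pro'
-- into this, hitting the cheap materialized column instead:
SELECT count() FROM events WHERE mat_plan = 'pro';
```

(On recent ClickHouse versions, `ALTER TABLE ... MATERIALIZE COLUMN` can backfill the column for existing parts.)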

The team over at ClickHouse Inc., as well as the community behind it, moves surprisingly fast. I can't recommend it enough, and I'm excited for everything on the roadmap, especially the Parquet and Iceberg support on the horizon.


100% agree. One of the biggest assets we had at <driver and rider marketplace app> was the data we collected. We built models on it that would determine how markets were run and whether drivers and passengers were safe. These were key features that enabled us to bring a quality service to customers (over ye ol' taxi). The same applied to the autonomous cars, bikes, and scooters. We used data to improve placement of vehicles to help us anticipate and meet demand. It was insane how much data went into building these models.

To say big data is dead sounds to me like someone desperate for eyeballs.

I do think there is a huge opportunity for DuckDB - running analytics on 'not quite big data' is a market that has always existed and is arguably growing. I've seen way too many people trying to use Postgres for analyzing 10 billion row tables, and people booting up an EMR cluster to hit the same 10 billion rows. There is a huge sweet spot for DuckDB here where you can grab a slice of the data you are interested in, take it home, and slice and dice it as you please on your local computer. I did this just this weekend on DuckDB _and_ ClickHouse!
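That grab-a-slice workflow is only a couple of statements in DuckDB (the bucket path and column names here are made up for illustration, and reading from S3 assumes the httpfs extension):

```sql
-- Pull down only the slice you care about from remote Parquet...
COPY (
    SELECT event, ts, user_id
    FROM read_parquet('s3://warehouse/events/2023-01/*.parquet')
    WHERE ts >= TIMESTAMP '2023-01-01'
) TO 'slice.parquet' (FORMAT PARQUET);

-- ...then slice and dice it locally as much as you like, offline.
SELECT event, count(*) AS n
FROM 'slice.parquet'
GROUP BY event
ORDER BY n DESC;
```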

Disclaimer: I work at a company that is entirely based on ClickHouse.


Didn't know that Posthog is based on CH these days. Interesting!


Check the list of companies using ClickHouse: https://clickhouse.com/docs/en/introduction/adopters/


Really neat that you scour job postings to learn useful intelligence about companies using your product. I do this too :)

I'm curious how you have this set up. Is it currently a manual process or you use social monitoring tools to help you find mentions of ClickHouse in the wild?



Thanks for the reply :-) but your link is only for tracking mentions on the HN website.

I was asking about how they are able to track mentions, across the web, of companies using ClickHouse. This type of info is usually listed in the tech stack section of job descriptions (and these links tend to expire once the position is filled).


PostHog | Remote (US/Europe timezones) | Senior Backend Engineer, SRE & more! https://posthog.com/ PostHog is open-source product analytics. Graduated YC W20, we were the most popular B2B software HN launch since 2012. Our GitHub repo [0] has 9k stars and a growing, active community. We've raised significant funding, are default alive, and are growing quickly. We're 30+ people. Our stack is Django/React/Redux (Kea -- main contributor works at PostHog too!).

Our mission is to build tools that will increase the number of successful products that exist in the world.

We are looking for Backend Engineers & SREs to take care of scaling to 1 Trillion+ events, both for our cloud offering and larger customers who are self hosting. If you are interested in processing data reliably at scale and love databases like ClickHouse, this is your role!

We have a culture of written async communication (see our handbook [1]), lots of individual responsibility and an opportunity to make a huge impact. Being fully remote means we're able to create a team that is truly diverse. We're based all over the world, and the team includes former YC founders, CTOs turned developers and recent grads.

To apply see https://posthog.com/careers or email us careers@posthog.com

[0] https://github.com/posthog/posthog [1] https://posthog.com/handbook/


> We are looking for Backend Engineers & SREs

Hi, can't see any backend positions. Only full-stack (which seems more for the frontend) and others (data engineer, head of product, etc.).


If PostHog sounds cool but you're a Product Designer and not an SRE, great news, we're looking for you too! See link above.


Anyone managing a k8s cluster who is fatigued with memorizing and reciting kubectl commands should definitely take a look at k9s[0]. It provides a curses-like interface for managing k8s, which makes it really easy to operate and to dive into issues when debugging. Move from grabbing logs for a pod, to a terminal on the container, and back out to viewing or editing the YAML for the resource definition in only a few key presses.

[0] https://k9scli.io/


I've used k9s every day for the last 6 months and it's really superior to everything if you have any vi-fu. It even plays nice with the terminal emulator's colour scheme. It's simply an all-around pleasant experience in a way no dashboard is.


I like Lens, as more of a GUIs fan, and very occasional k8s-er. It has saved me a lot of time.


Lens has been bought by another company, the same one that bought Docker, and they are not playing nice with the community.

Some people have forked it to remove the newly added trackers, the forced login, and the remote execution of unknown code, but I sadly guess that it will become a corporate nightmare and the forks will not be popular enough to really take over the development.


Which fork do you recommend?


There are a few OpenLens forks and one LibreLens, and so far they only remove the newly added controversial features.



That’s the one I use but the fork owner does not seem interested in maintaining the whole project. His fork is mostly a patch.


Owner here, I have stated the current situation and why I'm not making a full fork atm.

https://github.com/lensapp/lens/issues/5444#issuecomment-120...

Currently resolving other important issues like binary signing.


For those who use emacs, I'd also recommend the very nice `kubel` plugin - an emacs UI for K8S, based on Magit.


I had to look up k9s because I wondered what you meant by "curses like interface" - it couldn't be where my mind went:"f*ck_u_k8s -doWhatIwant -notWhatISaid"

And upon lookup I was transported back to my youth of ascii interfaces.



K9s made my learning of k8s way way way easier. I still use it every single day and I absolute adore it. The terminal user interface was so absolutely fantastic that it genuinely sparked my motivation to build more TUIs myself.


Do you pronounce it 'canines' or 'K-9-S'?


The logo is a dog, and everybody I know calls it "canines".


I use k9s every day, love it. Only problem is that the log viewing interface is buggy and slower than `kubectl logs`. Still love it though.


The larger the log buffer is, the slower k9s gets unfortunately. For me the builtin log viewer is useful for going back short time periods or pods with low log traffic.

You can work around it by using a plugin to invoke Stern or another tool when dealing with pods with high log traffic.
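For reference, a k9s plugin along those lines lives in your k9s `plugins.yaml` and looks roughly like this (the shortcut and stern args are just an example, not a recommended config):

```yaml
plugins:
  stern:
    shortCut: Ctrl-L
    description: Tail logs with stern
    scopes:
      - pods
    command: stern
    background: false
    args:
      - --tail
      - "50"
      - $FILTERED_NAME
      - -n
      - $NAMESPACE
      - --context
      - $CONTEXT
```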


Can’t vouch for k9s enough, it’s great and I think it helped me to gain a much better understanding of the resource/api system.


k9s is by far the most productive tool in my toolshed for managing & operating k8s clusters.


Has anyone on AWS gotten k9s to work with Awsume [0] authentication? I miss using it but I can't auth to different AWS accounts and access my EKS clusters with it unfortunately.

[0] https://awsu.me/

edit: I figured it out! You need to use autoawsume which is triggered by awsume $profile -a


I've been using and recommending k9s to everyone and it just works. I love it and use it enough that I'm a sponsor.

It's an amazing project by a solo dev, please consider sponsoring. My guess is anyone using kubernetes can afford to get their org to shell out $10 for it.

(I'm not affiliated with k9s in any way except as a happy user)


It's been a smash hit at work. There is a bit of a learning curve, but nothing compared to kubectl.

- : for changing "resource" types (and a few other miscellaneous things)

- / for filtering

- shift-? to see more commands


It's really better than any GUI I've used. I like it better than Weave Scope, Lens, and Rancher.


And oh-my-zsh's kubectl plugin abbreviates more commands, lets you add the current cluster to your prompt, and gives you a convenient alias to switch between clusters (kcuc <config context>).


Works even better with fzf (aka "what's this subcommand called again ?").

