
> I used to like Lisp's homoiconicity. These days, I'm not even sure the concept makes any sense? What is it even supposed to mean?

You are not alone:

http://calculist.org/blog/2012/04/17/homoiconicity-isnt-the-...


It's not particularly new, but recently I've been involved a lot in solving large combinatorial optimization problems:

https://en.m.wikipedia.org/wiki/Combinatorial_optimization

Algorithms to solve these, both exact methods and heuristics, are super useful in real-world applications, and very underappreciated by the HN crowd IMO.
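
To make "heuristics" concrete, here's a toy sketch of my own (not from any particular library): a greedy value-density heuristic for the 0/1 knapsack problem, one of the classic combinatorial optimization problems.

    // Greedy value-density heuristic for 0/1 knapsack: take items in
    // decreasing value/weight order while they fit. Cheap, but not optimal.
    fn greedy_knapsack(items: &[(f64, f64)], capacity: f64) -> (f64, Vec<usize>) {
        // items are (value, weight) pairs
        let mut order: Vec<usize> = (0..items.len()).collect();
        order.sort_by(|&a, &b| {
            let ra = items[a].0 / items[a].1;
            let rb = items[b].0 / items[b].1;
            rb.partial_cmp(&ra).unwrap()
        });
        let (mut remaining, mut total, mut chosen) = (capacity, 0.0, Vec::new());
        for i in order {
            let (value, weight) = items[i];
            if weight <= remaining {
                remaining -= weight;
                total += value;
                chosen.push(i);
            }
        }
        (total, chosen)
    }

    fn main() {
        let items = [(60.0, 10.0), (100.0, 20.0), (120.0, 30.0)];
        let (total, chosen) = greedy_knapsack(&items, 50.0);
        println!("value {total} with items {chosen:?}"); // value 160 with items [0, 1]
    }

On this instance the greedy answer (160) misses the optimum (220, from items 1 and 2), which is exactly the trade-off between cheap heuristics and exact methods like branch-and-bound or dynamic programming.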


Did you open source yours? I started something like this here: https://github.com/gravypod/gitfs

I worked at Accenture as an MD for several years, primarily on innovation and transformation programs. I have plenty to say about them, but I think the key driving factor for all of the grift and awful performance has a lot to do with how they operate, which is to sell in a big program, then pull a switcheroo and try to pack the project with as many low-paid MBAs as possible – kids straight out of college tasked with a (thin) slice of a major strategic program – or to find some sub to farm it out to at a really low price.

Since going out on my own as a consultant – focused on the same sort of growth programs, as opposed to audit – I generally find that I can achieve the same outcomes for a client with a handful of people on a reasonable budget.

I left primarily because it's just bonkers how much pork these big consultancies manage to get away with packing on, to the point where it was a major reputation risk to me.

I'd encourage any CXOs out there seeking to outsource major strategic initiatives to consider hiring individuals or smaller entrepreneurs with experience inside the bigs, but without the downward pressure to get as many butts in seats as possible.


> most regexp libraries are stuck in the 80s/90s, not keeping up with recent developments in research.

Can you show me the research that states how to add things like complement and intersection to general purpose regex libraries?

Most regex engines are backtracking-based, and in that context, adding complement/intersection seems pretty intractable to me.

For the small subset of regex engines that are based on finite state machines, it's pretty much intractable there too, outside of niche regex engines.

In fact, the research[1] suggests that adding things like complement/intersection is quite difficult:

> In particular, we show that when constructing a regular expression defining the complement of a given regular expression, a double exponential size increase cannot be avoided. Similarly, when constructing a regular expression defining the intersection of a fixed and an arbitrary number of regular expressions, an exponential and double exponential size increase, respectively, cannot be avoided.
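
For intuition on why these operations are natural on automata yet painful in regex syntax, consider the textbook product construction (standard automata theory, not something specific to the linked paper): intersecting two DFAs means running them in lock-step, so the states of the result are pairs of the inputs' states, which is multiplicative before you even try to turn the result back into a regex. A minimal Rust sketch:

    use std::collections::HashMap;

    // A tiny DFA over bytes; state 0 is the start state and missing
    // transitions are treated as a dead state.
    struct Dfa {
        delta: HashMap<(usize, u8), usize>,
        accepting: Vec<bool>,
    }

    impl Dfa {
        fn accepts(&self, input: &[u8]) -> bool {
            let mut state = 0;
            for &b in input {
                match self.delta.get(&(state, b)) {
                    Some(&next) => state = next,
                    None => return false,
                }
            }
            self.accepting[state]
        }

        // Product construction: run both DFAs in lock-step. The state
        // space is the *product* of the two inputs' state spaces.
        fn intersect(&self, other: &Dfa) -> Dfa {
            let n = other.accepting.len();
            let pair = |a: usize, b: usize| a * n + b;
            let mut delta = HashMap::new();
            for (&(s1, b1), &t1) in &self.delta {
                for (&(s2, b2), &t2) in &other.delta {
                    if b1 == b2 {
                        delta.insert((pair(s1, s2), b1), pair(t1, t2));
                    }
                }
            }
            let mut accepting = vec![false; self.accepting.len() * n];
            for (i, &a1) in self.accepting.iter().enumerate() {
                for (j, &a2) in other.accepting.iter().enumerate() {
                    accepting[pair(i, j)] = a1 && a2;
                }
            }
            Dfa { delta, accepting }
        }
    }

    fn main() {
        // "ends in a" and "even length", over the single-byte alphabet {a}
        let ends_in_a = Dfa {
            delta: [((0, b'a'), 1), ((1, b'a'), 1)].into_iter().collect(),
            accepting: vec![false, true],
        };
        let even_len = Dfa {
            delta: [((0, b'a'), 1), ((1, b'a'), 0)].into_iter().collect(),
            accepting: vec![true, false],
        };
        let both = ends_in_a.intersect(&even_len);
        assert!(both.accepts(b"aa") && !both.accepts(b"aaa"));
    }

Complement is just as mechanical on a complete DFA (flip the accepting states), but a backtracking engine never materializes a DFA to flip, and converting the product back into regex syntax is exactly where the blow-ups quoted above appear.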

And indeed, as another commenter pointed out, "minimal DFA" is effectively irrelevant for any general purpose regex engine. Not only do you not have the budget to build a DFA, but you certainly don't have the budget to minimize that DFA.

With respect to reversal, RE2 and Rust's regex crate both do that, but mostly as an internal strategy for finding the start of a match when using a lazy DFA. It's otherwise a somewhat niche feature.
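
Here's what that looks like from the outside, assuming the regex-automata 0.4 hybrid (lazy DFA) API: the forward DFA can only report where a match ends, so a second, reversed DFA is run backward from that offset to recover the start.

    // Cargo.toml: regex-automata = "0.4"
    use regex_automata::hybrid::regex::Regex;

    fn main() {
        // Internally this pairs a forward lazy DFA (finds end of match)
        // with a reversed lazy DFA (finds start of match by scanning
        // backward from the end offset).
        let re = Regex::new(r"[0-9]+").unwrap();
        let mut cache = re.create_cache();
        let m = re.find(&mut cache, "item 1234 of 56").unwrap();
        assert_eq!((m.start(), m.end()), (5, 9));
    }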

With respect to JITs, plenty of regex engines out there do that. PCRE comes to mind. So does V8.

Speaking as a general-purpose regex engine author: we aren't "stuck" in the 80s/90s. There are just some fundamental trade-offs at play here that make your ideas difficult to support. It's not like we haven't thought about it. Moreover, for things like complement and intersection specifically, actually reasoning about them in regex syntax is pretty tricky! I'm not sure if you've tried it or not. (There are some niche regex engines that implement it, like redgrep.)

[1]: https://dl.acm.org/doi/10.1145/2071368.2071372


I completely agree with the points made here; it matches my experience as a C coder who went all-in on Rust.

>"Clever" memory use is frowned upon in Rust. In C, anything goes. For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later (a technique known as HEARTBLEED).

Ha!

>It's convenient to have fixed-size buffers for variable-size data (e.g. PATH_MAX) to avoid (re)allocation of growing buffers. Idiomatic Rust still gives a lot of control over memory allocation, and can do basics like memory pools, combining multiple allocations into one, preallocating space, etc., but in general it steers users towards "boring" use of memory.

Since I write a lot of memory-constrained embedded code, this actually annoyed me a bit with Rust, but then I discovered the smallvec crate: https://docs.rs/smallvec/1.5.0/smallvec/

Basically with it you can give your vectors a fixed inline capacity (not on the heap), and they will automatically reallocate on the heap if they grow beyond that bound. It's the best of both worlds in my opinion: it lets you remove a whole lot of small useless allocs, but you still have all the convenience and API of a normal Vec. It might also help slightly with performance by removing useless indirections.
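
A minimal example of that inline-then-spill behavior, against the smallvec 1.x API linked above:

    use smallvec::{smallvec, SmallVec};

    fn main() {
        // Up to 4 elements live inline, inside the SmallVec itself.
        let mut buf: SmallVec<[u32; 4]> = smallvec![3, 1, 2];
        buf.push(4);
        assert!(!buf.spilled()); // still no heap allocation
        buf.push(5); // exceeds the inline capacity of 4...
        assert!(buf.spilled()); // ...so it transparently moves to the heap
        // Same API surface as a normal Vec from here on.
        buf.sort();
        assert_eq!(buf.as_slice(), &[1, 2, 3, 4, 5]);
    }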

Unfortunately this doesn't help with Strings, since they're a distinct type. There is a smallstring crate which uses the same optimization technique, but it hasn't been updated in 4 years, so I haven't dared use it.

