
Sounds like Netta was doing due diligence on the relationship.


And yet the reality is that there is no shortage of free pianos listed on my local Craigslist that no one has picked up yet.


Drake is nice for these kinds of data dependencies.

https://github.com/Factual/drake
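
For anyone who hasn't used it: Drake is essentially Make for data. A workflow file names each output, its inputs, and a command body (shell by default), and a step reruns when its inputs change. A rough sketch from memory, so treat the exact syntax as approximate:

  clean.csv <- raw.csv
    grep -v '^#' $INPUT > $OUTPUT

  report.txt <- clean.csv
    wc -l $INPUT > $OUTPUT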


I play both guitar and pedal steel. The tapping techniques that would be used on a horizontal 6-string guitar would be impractical on a steel guitar (lap or pedal) because the action is too high and there are no frets.


It's odd to me that the proposed Minimal Skeleton for tracking a bug doesn't include:

* who reported the issue

* what they reported happening

* what they expected to happen instead

For many reported bugs, it is clear what the actual problem is and what needs to be done to resolve it. In those cases, it can feel like unnecessary overhead to track the bug in a system separate from the rest of the development work queue.

But it's worthwhile to have a place to track bug reports that can't yet be acted on by writing code. Sometimes it's not immediately clear what is happening or what should be done about it. Sometimes you need to reach out to the reporter(s) for more information or to let them know progress has been made.

Even with well-understood bugs, there can be a many-to-many relationship between the bug and the development work to be done. Sometimes you have many people reporting the same issue in different ways, and it's not initially clear that there's a shared root cause. Sometimes one bug report is best addressed in multiple phases of development work.

I'd argue that there's always someone who's managing all of the above, even if they're not using separate bug tracking software to formally do so. But as a team grows, making the investment to "show your work" a little more can make it a lot easier to collaborate and delegate.


Thanks, this made me think. My first knee-jerk reaction was a vague objection about implicitness, but after some consideration:

* I am not sure "who reported the issue" is actually what matters, but rather "who is the contact person for more information about this issue" and "who should be informed for verification when this is done", where "verification" means "satisfies expectations now" (as opposed to the QA meaning of the word)

* Reported behavior vs. expectation is a good point for any expectation-mismatch report. I am not sure it should be required for all issues, though, as I find it can decrease clarity in the same way that artificially requiring the "As X, I need to do Y in order to achieve Z" template does


Another reason to include those fields is that reported bugs are often a case of misaligned expectations between those designing, implementing and testing a feature.

If you're forced to describe what happened and what you expected to happen, it becomes very clear when your expectations do not align with those of the people who designed or implemented the feature.


I came here to post something similar. I'll add that many of the hard things we do with Git seem to involve re-ordering or re-combining the underlying changes. If we want to make it easier to reason about changes to a set of changes, then I think we really want those changes to have some properties which they don't currently have.

It's powerful for Git to treat changes as line-by-line text diffs, because it allows us to manage changes to any textual data. But what if, instead, we borrowed an idea from distributed databases, and implemented all changes as commutative operations on a Conflict-free Replicated Data Type (CRDT)?
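
To make that concrete with a toy example: the simplest CRDT is a grow-only set, whose merge is plain set union. Union is commutative, associative, and idempotent, so two diverged copies can be merged in either order and still converge, with no conflict to resolve (the class and symbols below are invented purely for illustration):

  require 'set'

  # Grow-only set (G-Set): the only operations are add and merge.
  class GSet
    attr_reader :elements

    def initialize(elements = Set.new)
      @elements = elements
    end

    def add(x)
      GSet.new(@elements | [x])
    end

    # Merge is set union: commutative, associative, idempotent.
    def merge(other)
      GSet.new(@elements | other.elements)
    end
  end

  alice = GSet.new.add(:alices_change)
  bob   = GSet.new.add(:bobs_change)

  # Merging in either order gives the same result; nothing to resolve by hand.
  alice.merge(bob).elements == bob.merge(alice).elements  # => true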

I think almost every example of difficult rebasing would get significantly easier, but at what cost? We'd have to completely rethink how we write programs, because this would drastically limit the types of changes to a program that were valid. I wouldn't be surprised if this would require us to develop in entirely new languages.

There might be some meat to this idea, but again, I don't think we'd get there by mining existing Git graphs.


Git doesn't operate on diffs. It stores full content, using delta compression. Subtle difference, but it means you can create "ours" reverse merges that show no diff yet radically change the content of the repo.
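
You can see this with the plumbing commands: a commit object records a whole tree (a snapshot) plus parent pointers and metadata, and no diff is stored anywhere:

  $ git cat-file -t HEAD            # => commit
  $ git cat-file -p HEAD            # shows tree <sha>, parent <sha>, author, message
  $ git cat-file -p 'HEAD^{tree}'   # lists the blobs/trees of that full snapshot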

What you're talking about is patch theory, which is used by Darcs and Pijul. Pijul does a better job of explaining the theory.

At the end of the day, the point of version control is to keep a universally consistent snapshot of a sequence of bytes. Patch theory only tells you how to resolve conflicts. CRDTs such as Treedoc simply resolve the conflicts differently, by imposing a consistent ordering, since patches must be applicable out of order for the structure to be a CRDT.


Curious, have you compared this idea to what Darcs does? (I don't know Darcs well enough to do justice to it, but it sounds related.)


An example of what I'm thinking about, which I don't think Darcs can do (I'd love to be wrong):

Alice and Bob both branch off of master at the same point. In Alice's branch, she moves function `foo` into a different module/file. In Bob's branch, he changes `foo` to handle a new condition. Both wish to merge into master.

Whoever merges later is going to get a merge conflict and will have to resolve it manually, using their human understanding of the semantics of both changes. It's clear to me how that conflict should likely be resolved, but as long as those changes are presented as text diffs, I don't expect my VCS to be smart enough to figure that out on its own.

It would be interesting to explore other ways of representing changes, such that a computer would understand how to compose them in more situations like this.
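
As a toy sketch of what that could look like: if changes were keyed to a function's identity rather than to lines of text, Alice's move and Bob's edit would touch disjoint fields of the same record and compose in either order (everything below is invented for illustration):

  # Hypothetical model: the codebase is a map of function name =>
  # { module:, body: }, and a change is an operation on that map.
  MoveOp = Struct.new(:fn, :to_module) do
    def apply(code)
      code.merge(fn => code[fn].merge(module: to_module))
    end
  end

  EditOp = Struct.new(:fn, :new_body) do
    def apply(code)
      code.merge(fn => code[fn].merge(body: new_body))
    end
  end

  code  = { "foo" => { module: "a.rb", body: "old body" } }
  alice = MoveOp.new("foo", "b.rb")                 # Alice moves foo
  bob   = EditOp.new("foo", "body with new case")   # Bob edits foo's body

  # The operations touch disjoint fields, so they compose in either order:
  alice.apply(bob.apply(code)) == bob.apply(alice.apply(code))  # => true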

You can quickly come up with examples of changes which conflict in a way that should probably always require human intervention: say Alice and Bob each wish to set the same constant to a different value.

So, I don't expect that you could completely remove the need for developers to manually resolve tricky conflicts. At least, not without completely changing how we express changes to programs, which may well be a non-starter for practical purposes.


There is a product called semanticmerge that does this.


neat! thanks


I'm unfamiliar with Darcs, but thanks for calling it to my attention. Based on a quick look, it appears Darcs uses text diffs, so it's not quite what I'm talking about, but it's definitely interesting.


This article is from 2015, so that is the last year for which it reports data.

I found a more up-to-date data set at http://federal-budget.insidegov.com/ which is consistent with the reported years, and includes both earlier years and an estimate for 2016. Unfortunately, it seems the deficit is growing again, with an estimate of -$616 billion for 2016.

It's also worth looking at earlier years -- nothing before 2009 was higher than $464 billion. Looking at a longer timeline, it seems clear that the general trend is towards a bigger deficit, but the impact of the subprime mortgage crisis means we can play games with statistics by framing our timeline around that year.

This is similar to the trick used by Ted Cruz to argue against climate change, by insisting on presenting the data on an 18-year timeline, so that it would start in 1997 and include that year's abnormally high temperatures caused by an El Niño weather pattern.

http://www.cbsnews.com/news/fact-check-ted-cruzs-claims-abou...


I think one of the big problems with cities is how static their layouts can be. Many years from now, I like to imagine that cities will be made of movable parts that can be rearranged throughout the day to match the changing needs of the people.

For instance -- there's no need for me to sleep in an expensive part of town, but there's also no need for me to be awake while my bedroom is shifted out into the outskirts. This would be something like Bruce Willis' character's apartment in The Fifth Element, but I don't think we have to wait until we're building cities in space to start working on this.

Similarly, if there's a diner that closes after 3PM, why should I ever have to walk past its empty storefront later in the day? The popularity of food trucks is evidence that there is value in being able to move supply around to find the demand and that we can reorganize our cities on a short time-scale. We don't even need to wait for the parts to be self-driving, as nice as that would be.


Jay Kreps, architect of Kafka, calls it a log.

https://engineering.linkedin.com/distributed-systems/log-wha...


Here's an O(N) solution to the `findSum` problem in Ruby:

  # Assumes array1 and array2 are both sorted in ascending order.
  def findSum(array1, array2, sum)
    # For each x in array1, the value needed from array2 is (sum - x).
    # array1 is ascending, so these complements come out descending; reverse
    # to make them ascending too.
    complements = array1.map{|x| sum - x}.reverse

    # Walk both ascending arrays from their largest elements downward.
    i = complements.count - 1
    j = array2.count - 1

    while i >= 0 && j >= 0
      return true if complements[i] == array2[j]

      # Discard whichever value is larger; it can't be part of a match.
      if complements[i] > array2[j]
        i -= 1
      else
        j -= 1
      end
    end

    false
  end
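
A quick check, assuming both inputs are sorted ascending (example values are my own):

  findSum([1, 3, 7, 12], [2, 5, 9], 10)   # => true  (1 + 9)
  findSum([1, 3, 7, 12], [2, 5, 9], 100)  # => false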

