rq is super great; it's been indispensable for me while working with CBOR. It's also way more useful than jq if you want to filter down your data. Some examples:
I'm the author of rq, and it's great to hear about your use-cases!
Feel free to submit a feature request about the raw string use-case; I've been thinking about it for a while. Some people have wanted me to add line support (maybe -l and -L?), some have wanted raw text support (maybe -r and -R?), and others still have been talking about TSV/CSV support (not sure how that would look, tbh). I'm not sure what to do exactly, so feel free to tell me!
Aw dang, I wish I knew about this earlier. May have made teaching command-line data processing to beginners quite a bit easier. Example lesson using Spotify API (which is still public and free):
I personally like using jq, but I'm not sure how easy it is to grok for people who are new to both the command line and to serialized data structures in general. Will have to give rq a spin (waiting for it to compile on my machine), but it looks quite nice, at least for my own uses, which heavily involve YAML.
csvkit is my go-to tool for data, so much so that I rarely use SQL for anything other than when I actually need a database. I suppose if rq were to become my go-to data-processing tool, I'd probably do this:
csvjson mydata.csv | rq ...
Unless I'm missing it in the docs, there doesn't appear to be a way to convert to CSV? That would be helpful. FWIW, csvkit has a tool named in2csv, which will read from line-delimited JSON:
I don't have a Mac computer so it's hard for me to offer good Mac support. I'd really appreciate feedback on these kinds of issues so that I can fix them!
I _love_ jq. It's been an incredibly useful tool for me since I discovered it ~6 months ago. However, the article mentions "jq -n"; I personally find jq's syntax less appealing when it comes to generating JSON instead of parsing it. For that particular task, I prefer using "jo":
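For comparison, here's a minimal sketch of building the same made-up payload both ways; the field names and values are placeholders, not anything from the article:

```shell
# Building a JSON object from scratch with jq -n, passing values in
# safely via --arg so they get escaped properly:
jq_json=$(jq -cn --arg email you@example.com --arg pw hunter2 \
  '{email: $email, password: $pw}')
echo "$jq_json"

# The jo spelling of the same thing is considerably terser
# (shown as a comment since jo may not be installed):
#   jo email=you@example.com password=hunter2
```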
I won't throw shoes. However, I do prefer anything that forces a command's output into text, which JSON at least gets right.
Specifically, if I have something that outputs objects, it's possible (and likely) that I can't just pipe that output to less and explore what's going on. Great if I already know exactly what I want; terrible if I just want a visual exploration. Worse, if I can't dump it to a file, I may have to repeatedly re-run some expensive operation just to find out what will give me the answer I want.
Come on now, you're missing the point. If the lingua franca of your shell is objects instead of text, you can import from whatever format you want and work with it in high fidelity:
If the result is 0, it's well-formed, and 1 otherwise. You can also pretty-print JSON by opening the Scratchpad in Firefox and clicking the "Pretty print" button.
I would guess: a policy against installing unapproved software on production servers, systems that don't have compilers installed for security reasons (jq appears to be C code), or debugging on client/customer machines.
If you use the AWS CLI [0], you can embed JMESPath [1] queries via the --query switch, i.e., you don't need jq. I was a loyal supporter of jq, but I switched to JMESPath [1], and I love its query language more.
Azure also uses this in their new CLI [0]. The biggest advantage of JMESPath over jq is that JMESPath is a spec, and thus can be implemented in several languages [1].
I've been using jq quite a bit lately and agree that the docs would benefit from more complex examples. However, I've been able to get some fairly complex solutions done with jq. Do you have a concrete example of something you are trying to do?
I have server logs in jsonlines format in a file. My goal is to display some basic information (URL, referrer, time stamp, IP address, and user agent) for each log line with a particular header set to a specific value. Headers are in $.request.headers and are an array of key-value pair arrays.
Figuring out this use case from the jq docs or jqplay has been a major struggle for me. I feel that this use case reflects 90% of what I would want to use jq for. I feel that if I can't get over this hump just from docs, there's no way I could justify bringing this tool on board with my team.
Sounds like you're after the select() filter, which evaluates a boolean expression and passes the input to the output only if the expression is true.
So your example would be something like:

    select(.request.headers | map(.[0] == "X-My-Header" and .[1] == "my target value") | any) | <pick out some keys to display>
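Here's a runnable sketch of that filter against a made-up log line; every field name here besides request.headers is an assumption about the schema described above:

```shell
# Two fake log lines: only the first has the target header set.
cat > access.jsonl <<'EOF'
{"request":{"url":"/login","headers":[["X-My-Header","my target value"],["User-Agent","curl/7.64"]]},"ip":"10.0.0.1","ts":"2016-01-01T00:00:00Z"}
{"request":{"url":"/health","headers":[["User-Agent","kube-probe"]]},"ip":"10.0.0.2","ts":"2016-01-01T00:00:01Z"}
EOF

# select() passes a line through only when the header check is true;
# the object constructor at the end picks out the fields to display.
matched=$(jq -c 'select(.request.headers
                        | map(.[0] == "X-My-Header" and .[1] == "my target value")
                        | any)
                 | {url: .request.url, ip, ts}' access.jsonl)
echo "$matched"
```

Only the /login line survives; the /health line is dropped because no header pair matches.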
I realise though that your point wasn't the specific example, it's that the docs for these kind of cases are poor, and you aren't wrong.
I guess my main advice is to not think of jq as something like cut or sort - instead, it's more like awk or sed. You can do lots of crazy things with sed, but it's not immediately obvious how just from reading the man page. These kinds of tools require a little more time investment but are very powerful.
jq is awesome, but what stands out to me in the article is the use of curl. You can shrink those ~200 bytes of curl by two-thirds using HTTPie (https://httpie.org/):
http put your.api.endpoint email=your@email.address password=swaggerrocks
HTTPie and jq go together like peanut butter and chocolate, except without the caloric bloat.
I like ag, but unless I'm missing something, ack has this killer flag which ag lacks:
--output=expr
Output the evaluation of expr for each line (turns off text
highlighting) If PATTERN matches more than once then a line is
output for each non-overlapping match. For more information please
see the section "Examples of --output".
Basically, it allows you to use captured groups in the output:
cat file | ack '(\w+) (\d+)' --output '$1 is $2 years old'
On the other hand, ag allows multiline matching. So I end up using ag and ack together frequently.
Agreed, it's nice to consolidate disparate functionality into a single invocation.
But when I saw mVChr's sed line, and then again the perl line, I was reminded "Oh, right, old school." And I wondered for the Nth time how much of old school has been unknowingly reinvented, and how many times.
Nevertheless, I like the new(er) tools in this thread too.
Related HN discussion, but more encompassing of things you might need to do on the command line with data: Command-line tools for data science (2013) [1]
Recently discovered jq, and it has become an indispensable part of my toolbox. A lot of JSON tasks I used to write scripts for, such as filtering and multi file concat, are now possible with a short command.
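For instance, if "multi file concat" means merging top-level arrays from several files (an assumption on my part), jq's slurp mode makes it a one-liner:

```shell
printf '[1,2]' > a.json
printf '[3,4]' > b.json

# -s slurps all inputs into one array ([[1,2],[3,4]]); add flattens it.
merged=$(jq -cs 'add' a.json b.json)
echo "$merged"
```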
I wish jq were available from Java. We built a microservice recently that was just an aggregator of a bunch of other API calls; jq would have been perfect.
jq is awesome. While it can do a lot of really cool processing to filter out nested details, the majority of my usage of it is:
$PROGRAM_THAT_OUTPUTS_JSON | jq .
Which simply pretty-prints the JSON input with two spaces per indentation level. Also, $PROGRAM_THAT_OUTPUTS_JSON is usually a script that simply outputs the clipboard (alias for "xclip -o").
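A concrete round trip of that invocation, with an inline string standing in for the clipboard:

```shell
# jq . with no filter logic just reformats its input with 2-space indents.
pretty=$(echo '{"name":"jq","tags":["cli","json"]}' | jq .)
echo "$pretty"
```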
My usage is generally pretty simple as well, but if people are looking for a more complex example, I wrote one yesterday for putting together some test data. Turns DynamoDB responses into newline-separated flat arrays:
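The actual example isn't reproduced here, but a sketch of that kind of transformation might look like the following, using an invented two-item response (DynamoDB wraps every attribute value in a type tag like S or N):

```shell
resp='{"Items":[{"id":{"S":"a1"},"count":{"N":"3"}},{"id":{"S":"b2"},"count":{"N":"7"}}]}'

# Unwrap the type tags and emit one compact flat array per item;
# N values arrive as strings, so tonumber converts them.
rows=$(echo "$resp" | jq -c '.Items[] | [.id.S, (.count.N | tonumber)]')
echo "$rows"
```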
jq is really nice, and I always install it as part of my toolchain. However, I don't use it frequently enough to remember the syntax, which imho is not very intuitive :(
The links to the project aren't immediately obvious, so here they are:
https://stedolan.github.io/jq/
https://jqplay.org/