It would be nice if 7-zip added support for the new Windows 11 context menu. It requires that each context item is registered to an application and gets removed when the app is uninstalled, instead of the current free-for-all. [1] There's a fork, NanaZip, that adds this and a few more features, although the way it's structured makes it a non-trivial patch. [2]
> Due to the issues in Desktop Bridge file system virtualization, you are unable to use NanaZip in the Safe Mode of Windows.
> Due to the policy from Microsoft Store, NanaZip is unable to disable Desktop Bridge file system virtualization, so the file operations in %UserProfile%/AppData will be redirected in Windows 10, and file operations in directories other than Local, LocalLow and Roaming in %UserProfile%/AppData will still be redirected in Windows 11.
> Due to the policy from Microsoft Store, ... file operations in directories other than Local, LocalLow and Roaming in %UserProfile%/AppData will still be redirected in Windows 11.
Guessing here - the new installer format, MSIX, which Store apps also use, kicks in filesystem virtualization, so even though some garbo app thinks it's writing to `c:/windows/system32/save_preferences_here_why_not`, that write is actually redirected to somewhere under c:/users/<example>/AppData/...something
It's possible to unclutter the normal menu yourself, whereas the nested menu always costs extra clicks - and on top of that, all the options I ever use are in that new submenu.
The whole argument is bogus anyhow - if win11 wanted to avoid orphaned context menu registrations, then it makes sense to tie those to installed programs. However, that has nothing to do with what's nested and what's not.
Yes, windows should likely make it easier to declutter the context menu, and yes, that may require a new API. But the new interface is still a really poor choice even assuming those are givens.
No it doesn't; it comes from _my_ judgement. The fact that you actually use those items and appreciate hiding the other ones behind an extra click is fine, but it's purely a usability regression for me. If I could exactly reverse the distribution between what's nested and what's not, I'd be happier.
I'm curious which options in the new context menu you actually use most often? Clearly your usage is quite different from mine.
Why does Microsoft claim they take backward compatibility so seriously, while at the same time requiring programs to make changes like this to keep existing functionality working on new OS versions?
Programs still work, but their functionality is hidden under an extra menu item. This is actually a good change. The old menu is cluttered by the exact programs you are talking about.
It will be less cluttered because the old system let apps mix their commands in with system commands, messing with your muscle memory, while the new system has a dedicated API that groups extensions in their own place below the system commands.
This is the whole point to why Microsoft took this awkward step.
In my experience so far, I _exclusively_ use the options in the new submenu. It's purely a usability regression so far.
You can make the case for a new API to avoid orphaning, and for greater traceability - and also, hopefully, for better performance. But I sure hope the end result will be more usable not less, and I'm pretty disappointed that this intermediate step is such a step backwards.
Well, 7-zip, subject of this thread, does it (that one is useful though). Git adds "Git Bash Here" and "Git GUI Here", which I never use. VLC adds 2 items, which I never use. And there's stuff from Windows itself that's useless to me.
I have the same entries, as well as "Edit with Notepad++" in the menu of every file. I'm torn about that one. I use it, but mostly only because the "open with" menu is too cluttered as well.
There are some strange built-in entries from Windows, too. Why do I have the option to "play" a jpeg on any Bluetooth headset paired with my laptop? I just found out that besides "open with" and "send to" there is a third way to open a file: using "share" to send it to an app.
The most useless entry is probably from AMD Radeon software. Two entries on the very top of the context menu on every folder. I would understand if it appeared in the menu of the Desktop folder, but not on every single folder no matter how deep in the file structure.
Unarchiver has been one of those “install right away” utilities on my Macs for many years. Like you say, great UX and it’s extremely rare for it to not handle archives properly, even those in old or esoteric formats.
Ark [0] works well enough for me when I feel like using the GUI - I mostly use it from the context menu in Dolphin, in particular the "Extract and create subdirectory if needed" option, which provides a consistent experience for Windows-style archives, where the root directory is usually filled with stuff (Ark will create a subdirectory based on the archive name), and Linux-style archives, where the convention is to have everything inside one directory in the archive (Ark will not create an additional subdirectory if there is just one file or directory in the archive root).
I'd like this as well. As a workaround for now, you can right-click drag the icon elsewhere in the same directory/desktop. This will pop up the legacy context menu with 7-Zip available for use.
Yes you can click "Show More Options" as well, but I find the drag operation much faster once you get the habit down.
That sounds like the right menu. Maybe you just have 7-Zip configured differently than me. I have Extract and Add to... enabled as my context menu options to keep things cleaner.
No, that's the one, what I mean is that this isn't the old context menu but the "move here" mini context menu. It doesn't have everything like rename, properties, etc. so it's not a workaround for "show more options" for things that aren't 7-zip.
"Some reason" here is: backwards compatibility. Good old Windows backward compatibility. If you've got habits or automations based on the Menu key (or its even older shortcut: Shift+F10), Microsoft seems hesitant to break them, even if they felt they could change the default behavior of mouse users.
"7-zip now also has an official Linux version (`7zz`), which can be used to replace the unmaintained p7zip (`7zr`, `7za`, `7z`) ..."
That's interesting, thanks.
From time to time customers ask about adding "7zip" to the environment at rsync.net so that they can do things like:
ssh user@rsync.net 7z blah blah ...
... and an official linux version would be a step closer to a solution there.
I see that source is available so we should be able to compile it for FreeBSD without too much trouble ... I'll write up something in next quarter's tech notes about it ...
Thank you for pointing this out! This is the source of much confusion. Although Arch for example uses https://github.com/jinfeihan57/p7zip which seems to be reasonably maintained?
Very tangential: I was zipping up an executable for a Windows-using friend just yesterday, and the exe had a non-ASCII filename. I first zipped it with macOS command line zip(1). Surprised to learn that it unzipped into a gibberish filename on Windows. I then tried to “Compress to ZIP file” from Windows 11 File Explorer, and it wouldn’t even let me do that: “… cannot be compressed because it includes characters that cannot be used in a compressed folder…” How the hell is that acceptable in 2022?
There is a bit in the ZIP that's supposed to tell you the path and comment are encoded in UTF8. It's bit 11 of the general purpose flag. Despite being there since 2006 it's poorly supported by Windows and Mac. I believe Mac writes the path in UTF8 but doesn't set the bit.
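For reference, the flag sits in each entry's general purpose bit field, and it's easy to inspect; a small Python helper (the stdlib `zipfile` itself sets bit 11 only when a name can't be encoded as ASCII):

```python
import zipfile

def name_encoding(info: zipfile.ZipInfo) -> str:
    """Report how a ZIP entry declares its filename to be encoded.

    Bit 11 (0x0800) of the general purpose flag marks the filename
    and comment as UTF-8 (in the APPNOTE spec since 6.3.0, 2006).
    Without it, readers are supposed to fall back to CP437.
    """
    return "utf-8" if info.flag_bits & 0x0800 else "cp437 (fallback)"
```

Running this over the entries of a zip written on macOS would show exactly the mismatch described: UTF-8 bytes in the name field, but the flag left unset.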
Windows is full of forbidden path characters, and it always will be until they abolish some forty years of backwards compatibility (the CON, allowed characters etc) and expose their quite modern file system for what it is, abolishing fucking drive letters once and for all.
In this case these are not forbidden path characters, just non-Latin characters without any special meaning. At some point you should say "fuck Latin-1, this looks like Unicode so we're decoding as Unicode", not "don't use the Latin alphabet? You don't deserve to use a computer".
Not being able to use perfectly normal strings like aux or nul as filenames is of course an entirely different category of madness.
It is not tangential at all. The main reason I use 7z over built-in zip is because the former handles Unicode properly instead of the compression ratio difference (which is minor in daily use cases).
> How the hell is that acceptable in 2022?
Because Microsoft is hell bent on backwards compatibility.
Interesting to see a new release so shortly after 21.07, since 21.07 had (afaik) been in development for years until it finally came out. Does anyone know if somebody took over maintenance, or if they changed their internal processes to speed up development? Or was 21.07 just such a huge chunk of work that it simply took long, and now we can expect more frequent releases?
in any case, kudos for the continued development. this is one of my favorite pieces of software - no web bloat, just a very solid piece of engineering in a native application.
I'd very much like to see more of these, instead of everything turning into a web app nowadays. I find web apps unbearable most of the time - to the point that I'll choose native software that's no longer maintained if the only alternative is an electron/web resource hog, which is as far from snappy as it gets.
I'm not sure there's a "they" here; IIRC this is and always has been a one-man project by Igor Pavlov, who incidentally didn't just write the software but also created the LZMA algorithm, which has had an absurdly long run as one of the world's leading general-purpose compression algorithms.
Looking back at this version history [1], there was previously a gap of 3.5 years between releases (2011-2014). Skimming through the discussion forums, the 2.5-year gap between 19.00 and 21.07 was filled with a number of alpha and beta releases, e.g. 20.02 [2]. Version numbering has followed a consistent yy.## format since 2015.
I assume stupidity is to blame for SourceForge's current unpopularity. If they hadn't started bundling spyware with downloads, and had responded to GitHub's better features, their huge repertoire of legacy projects would have kept them relevant.
That and GitHub's UI was so much cleaner and professional looking. Even without the bundleware fiasco, the website was dated, and a maze to navigate. GitHub looked like the cool new kid on the block. I'm sure it would have been popular either way.
>infinite money glitch counterfeit naked short selling payment for order flow ponzi scheming financial terrorists
Oh god I can't even escape from meme stock cultists on a hacker news story about 7zip? You guys are even more annoying than cryptobros, and that's saying something.
The cultural zeitgeist has reached another level of loopy, particularly since twenty-twenty. Doing my best to find humor in the absurdity, of a world a bit broken.
I saw notes about improved support for tar, which got my hopes up, but it doesn't look like any changes were made to make handling compressed tar files easier. With 7zip, both compressing and decompressing are a two-step process, which is pretty frustrating. On Mac I use Keka, which makes it really simple. On Windows with 7zip you first have to pack to tar and then compress with gzip or whatever; likewise when decompressing you first extract the outer archive and then unpack the tar. Otherwise it's fantastic for my use cases.
yes, that has always bugged me. it seems like, for the rare cases where someone actually wants the .tar on the desktop, there could be a separate "extract tar" or "extract inner" option for that.
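On the command line you can at least skip the intermediate .tar by piping 7z into itself (`7z x -so foo.tar.gz | 7z x -si -ttar`), and the formats themselves are fine with one-pass handling - e.g. Python's tarfile chains the gzip decompressor straight into the tar reader, which is what one-step GUI tools like Keka effectively do:

```python
import io
import tarfile

# Build a small .tar.gz in memory, then unpack it in a single pass:
# "r:gz" layers gzip decompression under the tar reader, so no
# intermediate .tar file ever exists.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    member = tar.extractfile("hello.txt")
    assert member is not None
    extracted = member.read()
```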
I open an archive and double click an exe that's bundled with some DLLs.
What does WinRAR do? Unpack all the files and open the exe, everything works fine.
What does 7zip do? It just unpacks that single exe that then fails to run cause the DLLs weren't extracted.
Because of the behaviour I detailed in my initial post? Wouldn't apply to ReadMe's though, it doesn't have to be that dumb. At least one other poster agrees with me in the comments too.
Definitely sounds like 7zip takes security more seriously -- I would definitely want my archiving software to only handle archiving, not starting programs for me.
Yes! This has been a major annoyance with 7zip - not only does it not extract all the files (this isn't just EXEs; lots of files depend on other files, like HTML files referencing images or other HTML files), but it also has a race condition: 7zip and whichever program is associated with the file's extension race over whether 7zip deletes the "temporary" file before the program manages to open it.
7zip is good for extracting archives somewhere and for supporting a ton of archive formats (and especially disk image formats!), but as an archiving tool its feature set is barebones at best.
Which is basically why on Windows i tend to have both installed - WinRAR for being an actually good (and very fast) archiving tool and 7zip for handling the archives WinRAR cannot handle.
(though nowadays i handle most archives via Total Commander - which i use even on Linux via Wine :-P - which asks you if you want to either extract just a single file or all files in the archive and is IMO the best approach anyway)
I can't find a link to the source anywhere on the SourceForge page. I did find a GitHub repo with claimed sources and a link to the SourceForge page, but its code was many years old.
Anyone know if this is open source, and where the sources are supposed to be?
It might technically be "open source", but really it's source-available. They just dump some source code every release: no commit history, no documentation. And although I haven't tried it, you almost certainly can't build it without reverse engineering, because drops like this tend to lack some build files, dependencies, or whatever.
I already clarified that technically it's probably open source
"Probably" because some projects like this don't actually include all parts of the source. FreeFileSync, for example, claimed to be open source and distributed some source, but in fact lacked many parts - not just build files but actual source files (I think it can be built nowadays, however).
The big problem here isn't that you don't have a commit history. It's that these programs are often unbuildable. And often it seems that they are purposely distributed this way to deter other people from building or forking your software.
Again however, I haven't actually checked the source files provided by 7zip in particular, but this problem I describe is a very common pattern in software whose source is distributed this way.
Nice to see it finally has an option to propagate Mark-of-the-Web (is it on by default?). It's been like 10 years since this was reported as a bug, and it was a potential security hole for any organization that used it.
Can anyone recommend an archive format that's better than zip and tar? I don't like tar because I can't list or extract files without decompressing everything; zip is problematic on some systems due to legacy crap; and I want strong encryption and good compression by default that won't be screwed up by legacy systems. I'd rather use a brand-new format that can't be accidentally mangled on some systems by some clients.
tar.gz requires decompressing everything (in-memory at least) in sequence because the compression is separate from the archive format. It's a tar archive compressed entirely in a gzip container, whereas (e.g.) zip has a file table that can be accessed without decompression.
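The difference shows up directly in Python's stdlib: zipfile seeks to the central directory and can pull out one member without touching the rest, which a gzipped tar can't do:

```python
import io
import zipfile

# Build a zip with two members, then read just one of them.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("big.bin", b"x" * 100_000)
    zf.writestr("small.txt", b"just this one")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()        # read from the central directory only
    data = zf.read("small.txt")  # decompresses only this member
```

With a .tar.gz, listing the names alone already requires decompressing the whole gzip stream up to the last header.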
Sometimes I want to output specific files from a zip file to stdout.
These programs can list the files in the archive, but I have never found an easy way to extract specific files without having to type paths on the command line, or use a filelist (something like tar xf 1.tar -T list which zip utilities generally cannot do).
Until I am advised of something better, a quick and dirty solution:
usage: 1.sh 1.zip # displays list of files with line numbers
1.sh 1.zip 5p # extracts the file listed on line 5 to stdout
1.sh 1.zip 5,6p # extracts the files listed on lines 5 and 6 to stdout
1.sh 1.zip 5p\;8p # extracts the files listed on lines 5 and 8 to stdout
1.sh 1.zip /src/ # extracts the files with "src" in their path to stdout
case $# in
1)
    # list: strip 7z's header/footer lines and number the file names
    exec 7z l "$1" | sed -n '/^---/,/^---/{/^---/!p;}' | cat -n
    ;;
2)
    y=$1
    shift
    # select names with the given sed expression; $x stays unquoted on
    # purpose so multiple selected names split into separate arguments
    x=$(7z l "$y" | sed -n "/^---/,/^---/{/^---/d;s/.* //p;}" | sed -n "$@")
    7z x -so "$y" $x
    exec echo
    ;;
*)
    exec echo "usage: $0 zip-file [sed cmd]"
    ;;
esac
Would be nice to know if or how 7-zip plans to address this. Patch management tools are going to call this a vulnerability until there is some resolution.
When I last installed 7-zip, I had a merry time verifying I had a genuine copy by checking SHA256 values from various sketchy websites, since the installer wasn't signed. I persisted because 7-zip was just too darn useful, but the experience puts me off upgrading.
Yeah, I'm not sure why 7-zip is still unsigned. Code signing certificates are cheaper now and it's much easier to check that than random hashes, not to mention reputation trusting by AVs and such.
The entitlement to demand that someone providing free software must pay money to Microsoft and its partners for the privilege...
Fuck that. 7-zip.org has official download links - you can expect those to be the official binaries from 7-zip.org - and a code signing certificate won't really tell you more than that.
If you want Microsoft to gatekeep your software then please complain to Microsoft if that inconveniences you, not to anyone else.
Demand? Entitlement? Suggesting a QoL improvement is “demanding”? And I’d happily donate to make it happen but there’s no donation link or option anywhere.
> 7-zip.org has official download links - you can expect those to be the official binaries from 7-zip.org - and a code signing certificate won't really tell you more than that.
Have you heard of compromised download servers? Signing on your own machine before distribution at least means you can guarantee the binary hasn't changed. (And yes, an attacker could just change the hash on the website too - it's the same server.)
Linux Foundation unveils Sigstore — a Let's Encrypt for code signing [1] [2]
> The Linux Foundation, Red Hat, Google, and Purdue have unveiled the free 'sigstore' service that lets developers code-sign and verify open source software to prevent supply-chain attacks.
> As demonstrated by the recent dependency confusion attacks and malicious typo-squatted NPM packages, the open-source ecosystem is commonly targeted for supply-chain attacks.
> To pull these attacks off, threat actors will create malicious open-source packages and upload them to public repositories using names similar to popular legitimate packages. If a developer mistakenly includes the malicious package in their own project, malicious code will automatically be executed when the project is built.
> To prevent these types of attacks, 'sigstore' will be a free-to-use non-profit software signing service that allows developers to sign open-source software and verify their authenticity.
> "You can think of it like Let’s Encrypt for Code Signing. Just like how Let’s Encrypt provides free certificates and automation tooling for HTTPS, sigstore provides free certificates and tooling to automate and verify signatures of source code."
Well, relatively cheap. You have some Comodo/Sectigo/DigiCert resellers selling for $70-$100/year.
And it won't instantly skip SmartScreen (that requires EV which is $300+) but it helps to establish reputation, i.e.: if you consistently release safe software signed with a cert named X, then SmartScreen and co. learn to give a good starting reputation to binaries signed by X.
Given that not EVERYBODY does EVERYTHING on Linux in the CLI, can anyone recommend a GUI archive handler like engrampa, but that is not bound to p7zip and will instead use 7zz?
It has supported tar archives for as long as I can remember.
It hasn't had the best support for them, though, since it wouldn't always let you see the list of files inside a compressed tar. You often had to extract the tar first to do that.
Did they fix the issue from 2005 where unpacking an 800GB file first unpacks to %temp% on C: (my OS partition only has 20GB) rather than to the folder set in the config file? Or do I still have to use RAR? ROTFL, open source, LOL
Are you trying to drag and drop the files to the destination in a file explorer window? Because when I use "extract" and point the destination folder I don't think it does what you say.
I use 7zip a decent amount. My only nit with it is its tendency to create badly fragmented files on compress. For large files you can be looking at 1000+ fragments. Been meaning to go thru the code and find where it is writing out the compress and add a bit of a buffer before write. That should let most filesystems allocate better. It may be also closing/opening the write file too much. But I have not dug into it.
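The idea can be sketched without touching 7-zip's code at all - this is a hypothetical illustration, not what 7-zip does, and the 8 MiB buffer size is an assumption: wrap the output so the filesystem receives fewer, larger writes, which gives it bigger contiguous allocation requests.

```python
import io

# Assumed buffer size for the sketch; tune per filesystem/workload.
CHUNK = 8 * 1024 * 1024  # 8 MiB

def open_buffered(path: str) -> io.BufferedWriter:
    """Open a file for writing behind a large write buffer.

    The raw file is opened unbuffered, so all batching happens in the
    single BufferedWriter: many small compressor outputs coalesce into
    CHUNK-sized writes before reaching the OS.
    """
    raw = open(path, "wb", buffering=0)
    return io.BufferedWriter(raw, buffer_size=CHUNK)
```

Preallocating the final size up front (where the compressor can estimate it) would help allocation even more, but that's OS-specific.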
Microsoft needs to provide reasonable means for open source developers to sign their binaries on Windows. Having to pay for the privilege of being allowed to provide free software is not reasonable.
Microsoft provides the Microsoft Store, and when you distribute apps through it, Microsoft can handle all the binary signing. The price of that has dropped to $19/year for "individual developers" [1]. That seems pretty reasonable to me; I'd hope most open source developers could at least get a GitHub sponsorship for $20 every 12 months. Rumor is that if you are an open source developer with a well-known app, you can get the developer account for free if you find the right channel to ask; they do seem to want more "official" distributions of open source on the Store.
Obviously, that's not the answer a lot of open source developers want because of mistrust in the Store and Store distribution, but after years of all the complaints of the CA cartels and people asking for an alternative, Microsoft built an alternative (and no one came, lol).
Having to pay for the privilege to provide any software is not reasonable, Microsoft is just giving the certificate cartels unjustified power (which wouldn't be solved if Microsoft themselves signed the software since that'd still be giving Microsoft said unjustified power).
The author should host the software somewhere more reputable, like GitLab.
SourceForge's inclusion of spyware and malware in downloads, and the current owner's willful ignorance of racism and hate speech in its other publications (Slashdot), is not the sign of a serious company offering a serious service.
I have no idea what the issue is with Slashdot regarding racism and hate speech, but I do find it interesting that you want the current owner held responsible for it, and also for the prior owner's actions at SourceForge, which the current owner stopped over 6 years ago upon gaining ownership of SourceForge.
You have clearly never read Slashdot, or you would know that the comments section has always been filled with racism and similar trash talk, much more recently than the 6 years you mention. They also never act on those reported comments, which I know because I reported hateful comments daily. But please explain to us why we should hold the owner company in high regards.
I never said they should be held in high regard. I said it was interesting that you wanted to judge them both by their current actions at one site and by a previous owner's actions at another. I also find it interesting that in your final sentence you used plural pronouns, as though there were a larger group than just yourself behind your comment.
Of course, I am fully confident that other readers are with me and share the opinion that the current owner's disregard for racist talk on Slashdot is questionable.
I have not used it since 2019, but at that point it was still rife with hate speech directed towards the Chinese, the Russians, and generally black and brown people.
And I do not have a curated journal of this ready for your inspection, you simply have to trawl the comments section yourself. That way you don't have to take my word for it.
Then maybe you should provide that context: that this is simply your own opinion based on observing user comments, rather than an objectively established fact.
How do you determine that comment curation that fails your own standards is "willful ignorance" on behalf of the owner?
I would say that language involving derogatory names for black people, Arabs, Russians, and Asians would always be objectively established as racism. I don't know why anyone would disagree with that.
And I reported hundreds of comments with language like this over the years, and whenever I followed up on it I always found the comments still present in the discussion thread.
Please add (I believe this should be trivial) all of tar's metadata-related features (I mean saving links and access rights). There should be only one archive format; all the formats other than 7z have to die. TAR and ZIP are both horrible and should be displaced by 7z.
This is kinda naive to wish for - tar has applications that reach beyond stuffing a BLOB containing other BLOBs and some metadata into a seekable file somewhere, such as archiving data to magnetic tape/LTO.
And even if you develop the perfect archive format that happens to nail every possible use-case 100% (which you won't, because there just are too many), you will STILL have to deal with compressed and archived artifacts which accumulated over the last six or so decades in various places.
That's a parallel universe far, far away: ~99% of the people who have to use tar every day (just because it's the standard and the only common format supporting links/access metadata) will never touch a tape drive. I don't see a reason why we can't use 2 separate formats - one for everyday packaging, a different one for tape drives.
> And even if you develop the perfect archive format
There is a perfect format already - it's 7z. Just add file access rights and links support to it. No need to invent anything really new.
Not really. Adding support for a particular kind of metadata is a change so minuscule it barely qualifies as a change to the format. Apple just stores their filesystem metadata in a special sub-directory in ZIP files, and the only problem with Apple's solution is that nobody else respects it. 7zip is a format developed and maintained by a specific author, who is alive and active, so he can just build the same thing into the standard 7zip implementation, and chances are everybody will accept it.
By the way, I have just found an actual imperfection in 7zip: it doesn't let you choose the order in which archived files are stored, nor choose different compression parameters for specific files. This limits its applicability: e.g. the EPUB standard says the first file in the archive must be "mimetype" and it must not be compressed. But I believe this can be fixed with reasonable ease (and probably without breaking changes) as well.
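For comparison, zip's per-member control is exactly what makes EPUB possible; with Python's zipfile the constraint takes two lines, whereas the 7z format has no way to express it:

```python
import io
import zipfile

# EPUB (OCF) requires "mimetype" to be the archive's first entry and
# to be stored uncompressed; zip lets you pick entry order and
# per-member compression independently.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("mimetype", "application/epub+zip",
                compress_type=zipfile.ZIP_STORED)
    zf.writestr("OEBPS/content.opf", "<package/>",
                compress_type=zipfile.ZIP_DEFLATED)
```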
7z makes the same mistake as zip: implementing both a filesystem and a compression algorithm. There's no way a single tool can implement all the bells and whistles of the various filesystems in use today. For example, skimming through the 7zip format ( https://py7zr.readthedocs.io/en/latest/archive_format.html ) I can see specific support for file attributes from (current versions of) Windows and Unix, but no AmigaOS protect bits ( http://www.jaruzel.com/amiga/amiga-os-command-reference-help... )
Keeping these two tasks separate allows swapping-out the implementation of each (e.g. I tend to use .tar.lz these days, since I'm mostly on Unix)
once you have a general file-system representation format (with all its complexities) adding compression of the blobs seems a minor addition.
On a related note, I was surprised to discover that the Windows 10 backup tool was able to store about 240GB of various data in barely 80GB of backup; I believe it must have split most files looking for common fragments to deduplicate (perhaps with some NTFS magic behind it). .tar.xz will never be able to do that: if I compress 10 copies of the entire Firefox codebase, it will never recognize the duplicated files; only something like filesystem+compression could do that.
> only something like a file-system+compression could do that
borg handles this just fine. I put all kinds of stuff into borg repositories: raw MySQL/PostgreSQL data directories, tar archives (both compressed and uncompressed), or just / recursively. You can do stuff like:
$ tar -caf - / | borg create …
or even
$ borg create … </dev/sda
and your repository grows by the amount of data changed since last backup (or by a couple of kilobytes if nothing has changed).
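The core of that dedup is just content addressing - a toy sketch at file granularity (borg's actual scheme is chunk-level and also splits within files, so this is a much simplified illustration):

```python
import hashlib

def dedup_store(files: dict[str, bytes]) -> tuple[dict[str, str], dict[str, bytes]]:
    """Store each unique blob once, keyed by its SHA-256.

    Returns (index mapping path -> digest, blob store). Ten identical
    copies of a tree collapse to one set of blobs plus a small index,
    which is the effect a plain .tar.xz can't achieve.
    """
    index: dict[str, str] = {}
    blobs: dict[str, bytes] = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        index[path] = digest
        blobs.setdefault(digest, data)  # keep only the first copy
    return index, blobs
```

Compressing the blob store afterwards is then a separate, swappable step, which is the "keep archiving and compression apart" argument from upthread.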
> once you have a general file-system representation format (with all its complexities) adding compression of the blobs seems a minor addition
That's a vacuous statement, since there will never be a "general file-system representation format"; that's my point. Even if someone collected together all the features of every filesystem ever developed, that would still ignore those which haven't been invented yet.
Further, it requires a choice of which compression algorithm? What about those that haven't been invented yet?
These problems only arise if we want to define "one true archiver+compressor". If we keep these concerns separate, there's no problem: we choose/create a format for our data, and choose/create a compressor appropriate to our requirements (speed, size, ratio, availability, etc.)
> .tar.xz will never be able to do that, if I try to compress 10 copies of the entire firefox codebase, it will never be able to recognize the duplicated files; only something like a file-system+compression could do that
This seems to miss my point, in several ways:
Firstly, xz has a relatively small dictionary, so your use-case would benefit from an algorithm which detects long-range patterns. Choosing a different compression algorithm for a .tar file is trivial, since it's a separate step; whereas formats like 7zip, zip, etc. lock us in to a meagre handful of hard-coded algorithms. That's the point I'm trying to make.
Secondly, .tar is designed for storing what it's given "as is": giving it hardlinked copies of the Firefox source will produce an archive with one copy and some links, as expected; giving it multiple separate copies will produce an archive with multiple copies, as expected. That's not appropriate for your use-case, so you would benefit from a different archive format that performs deduplication. Again, you're only free to do this if you don't conflate archiving with compression!
In your case, it looks like a .wim.lrzip file would be the best combination: deduplicating files where possible, and compressing any long-range redundancies that remain. This should give better compression, and scale to larger sizes, than either .tar.xz or .7z
(Note that WIM seems to also make the mistake of hard-coding a handful of compression algorithms, so you'd want to ignore that option and use its raw, uncompressed mode. My brief Googling didn't find any alternative formats which avoid such hard-coding :( )
I might have made stronger statements than warranted...
What I was trying to say is that .tar IS a filesystem description format; used to convert the filesystem into a stream that is then compressed separately.
[1] https://blogs.windows.com/windowsdeveloper/2021/07/19/extend... [2] https://github.com/M2Team/NanaZip