PowerShell 4.0, shipped with Windows 8 and above, has a pretty simple Get-FileHash command built in, so you don't need a separate download on those platforms. (On Windows 7 you still need either the download you link to, or a manual upgrade to PowerShell 4.0.)
I use HashTab. It makes a tab in the file properties dialog that will automatically compute several hashes at once. It even has copy-paste-verify functionality.
The author's other project, https://github.com/btrask/stronglink, looks fairly interesting too - 'A searchable, syncable, content-addressable notetaking system'.
Thanks! Right now we're sort of in a chicken-and-egg situation where everything uses (mutable) URLs, so knowing the hash of a file isn't very useful. The Hash Archive is part of my strategy for promoting content addressing, to hopefully raise demand for systems like StrongLink (and others).
The more people pushing for awareness the better! Hopefully with other work like IPFS (which you do reference), use of this approach for /more/ consumer systems is around the corner. :)
It says in the About section on the home page "Unless someone can intercept your local traffic and our traffic to a site, you'll be able to spot MITM attacks". I'd argue that this is not entirely true. If an attacker operating as a MITM can intercept all local traffic (e.g. via some form of DNS attack), they do not need to control the traffic from hash-archive.org to 3rd party sites. They simply need to control how hash-archive.org is presented to the victim. In theory, the attacker could serve up a bogus version of hash-archive.org that appears to be legitimate but is returning falsified hashes that match the malicious downloads they have intercepted elsewhere.
You might claim this is not possible because hash-archive.org runs over HTTPS, so an attacker would also have to somehow obtain a valid SSL certificate signed by a trusted CA. This is true, but if someone types hash-archive.org into their browser URL bar, the initial request is made over HTTP. The legitimate hash-archive.org seamlessly redirects the client to HTTPS, but a fraudulent hash-archive.org could just keep the victim on HTTP.
To provide some mitigation against this type of attack, you could do a couple things:
* Only allow hash-archive.org to be accessed over HTTPS (port 443). Close port 80. [EDIT: in fact, this doesn't really help all that much because the MITM can still try to serve their bogus version of hash-archive.org over HTTP]
* Set the HTTP Strict Transport Security header (HSTS) [1]. After the first visit to the legitimate hash-archive.org, compliant browsers will only ever allow future visits to be made over HTTPS.
For good measure, you could also set up HTTP Public Key Pinning (HPKP). HPKP is a 'security feature that tells a web client to associate a specific cryptographic public key with a certain web server to prevent MITM attacks with forged certificates.' [2]
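As a rough sketch, the two response headers might look something like this (the values here are illustrative, not the site's actual configuration):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
Public-Key-Pins: pin-sha256="<base64 hash of primary key>"; pin-sha256="<base64 hash of backup key>"; max-age=5184000
```

One caveat with HPKP: browsers require at least one backup pin, and a mistaken pin can lock returning visitors out of the site for the full max-age, so it needs to be deployed carefully.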
Fantastic! While not as bulletproof as receiving the hash out-of-band for a critical resource, this is better than verifying against a hash received from the same origin as the resource, and far better than no hash verification at all. And because this is FOSS, we can gain some protection against the compromise or MITM of a single, central hash-archive server when many of them are deployed by distinct entities on different public domains.
One request: there are lots of users who would be well-served by a way to compute hashes in-browser via the WebCryptoAPI [1]. Would you consider accepting this feature into hash-archive? For users who aren't able to install or have difficulty using a hash calculator locally, this would enable verification of downloaded files in a one-stop online workflow.
I was trying to do something exactly like this a few months ago, but I found that browsers' extension APIs have no way to access files after download. You'd have to intercept a download and manage it entirely in the extension, which will likely break so many JS-driven downloads as to be very bothersome.
Alternatively you would have to install a local helper process outside the browser, and at that point, you're basically an antivirus. In fact, I think AVs are the ones better placed to add this as a feature, as well as already having the significant resources needed to maintain a secure archive of hashes for files (although I had thought up a signature-based scheme that software vendors/distributors could adopt for a small fee).
There are some ways of doing it, depending on exactly what your threat model is, but I think it's risky in general. Right now this is just in the planning stages, but I want to have submitting URLs be a manual button click for each download, and also provide an option to use a local copy of the database (so that all lookups would be completely private).
It would be nice if there were an easy way to copy the hashes, for example to diff them against what you computed. As it is now, the page is laid out so that it is hard to copy just the hash.
https://technet.microsoft.com/en-us/library/dn520872.aspx
I believe the syntax is just `Get-FileHash <path>`, with an optional `-Algorithm` parameter (SHA256 is the default).