Hacker News | smusamashah's comments

I keep a long list of browser based and CLI p2p file transfer tools[1].

LimeWire (which now has its own cryptocurrency, probably) has been on a rampage recently, acquiring some really good tools, including ShareDrop and SnapDrop. https://pairdrop.net/ is the last one standing so far.

[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...


You could probably also add Dragit[1], a desktop p2p file-sharing tool for local networks with automatic host discovery, currently supporting Linux and Windows. (Author & maintainer here.) I'm not sure if I should keep working on the tool, considering the length of the list so far. :D

[1]: https://github.com/sireliah/dragit


You should add sendme; it's one of the best I've tried for the CLI. https://www.iroh.computer/sendme

The Galene videoconferencing system <https://galene.org> includes peer-to-peer file transfer. The web client is built in, but there's also a command-line client at <https://github.com/jech/galene-file-transfer>.

https://wormhole.app/ has been spared and is pretty good. Encrypted, downloads can start before the upload is finished, and it has a decent size limit.

Unrelated to the wormhole Python CLI tool and its associated file-sharing protocol.


Both not to be confused with Web Wormhole, which is also pretty good: https://webwormhole.io/

Slightly different approach: "P2P first", i.e. both sender and receiver need to be online simultaneously throughout the transfer, but there's a TURN relay, so it should work in almost all network configurations.


It's not a p2p tool btw.

It can be, if you send files larger than 5GB. https://wormhole.app/faq

Yeah, it uses webtorrent behind the scenes: https://webtorrent.io

...which is freaking awesome torrent software, by the way.


Was very annoyed when they acquired ShareDrop.

Not sure if it's p2p, but I also found that this exists: http://xkcd949.com


Bless you.


Other than the cuts every 2-3 seconds to keep kids engaged, also notice that in Cocomelon the CAMERA NEVER STOPS, ever. In every scene it's moving. It could be a slight movement or an exaggerated one, but it's never stationary.

The 2000s school of action movie / TV show cinematography, starting with the Bourne films. https://tvtropes.org/pmwiki/pmwiki.php/Main/JitterCam

There was another website which let you find movie quotes. You search for a quote and it brings up a small clip of characters saying the thing you searched for. It had a very similar UI; I can't find it. Edit: probably https://clip.cafe/ (there are a few others too).

There is another one which is like a Netflix of subplots extracted from movies. I remember it had lots from Rick and Morty, because Rick and Morty keeps bringing up random subplots never to be discussed again.

There is another website which is a gif database of awesome shots from movies etc. It's like a reference database of camera work.

I'll list these when I find the URLs.


  another website which is a gif database of awesome shots from movies etc.
  It's like a reference database of camera work.
Is it https://eyecannndy.com/, or a different one?

Yes, exactly this one. Thanks.

getyarn.io is the clip site I've seen. https://nestflix.fun/ is for movies within movies; not sure if that's what you're referring to by subplots though.

This https://nestflix.fun/ was the one I was thinking of in terms of subplots. Thanks.

EDIT: it's on GitHub; you can contribute more plots by raising new issues here: https://github.com/lynnandtonic/nestflix.fun


you might be thinking of seinquotes -> click the image to get a video gif of the quote https://seinquotes.com/?search=hold+the+reservation

When I think about LLMs reaching their peak at writing code, I can't help but think they will be writing hyper optimized code that will squeeze every last bit of processing power available to them.

I use these tools to get help here and there with tiny code snippets. So far they haven't suggested anything finely optimised. I guess that's because the bulk of what they were trained on isn't optimised for performance.

Does anyone know if any current LLMs can generate super optimised code (even assembly)? I don't think so. It doesn't feel like we're going to have machines more intelligent than us in the future if they're full of slop.


Nope. I tried various models (you know the common stuff: Claude 3.7, o1, R1, and the like) to write SIMD code, both as C++ intrinsics and .NET/C# vector intrinsics, and the results have been really, really subpar.

The voices with a Chinese origin, when generating English samples, do sound like a Chinese person speaking English. It is very interesting.

This is a low-tech replication of the Black Mirror episode where a guy and a girl fall in love with each other after a first date, realise something is wrong with the world, and try to exit the simulation. In the real world, those two were deemed a match by an AI because they passed the challenge together in the simulation.

It was this very tool. It could be sshed into with

ssh vtm@netxs.online

That domain is dead now.


Hooray, thank you!

When I said "zooming" I was thinking of the white tethers attached to each window which would pull them back into a centre bundle. You can see what I mean here: https://changelog.com/news/a-textbased-desktop-environment-i... at the bottom left, the lines going off to a single point.

Actually, by zooming out, I can still see the tethers on the windows. The ssh version was quite mind-blowing back then...


This is the same thing as the public demo back then via 'ssh vtm@netxs.online'. There are no public demo servers running now, but you can still ssh into your running vtm instance with any number of connections. If you're using MS Windows, you can even get the output in a standalone GUI window via 'vtm ssh user@unixserver vtm'.

am I imagining it, or were there more features in the original public demo?

There were demo apps in the menu, they are still there: 'vtm --run text', 'vtm --run calc', 'vtm --run test', 'vtm --run truecolor'... You can play with it by running them directly inside the vtm desktop, by typing (or right click to paste) commands like vtm.desktop.Run({ type='calc' }) in the 'Log Monitor' command line.

Now vtm is the same as it was running on demo servers. In the modern version, the fading effects and window shadows are disabled by default in the settings. Perhaps the fading effects were removed as unnecessary.

I feel very doubtful about the usefulness of these tools because of hallucinations. How reliable is this one compared with others like it? How well does it cite the source?

To me, getting data from my notes correctly is what matters most. I use AI tools for coding occasionally (which I can easily verify on my own), but for anything else I can never bring myself to be doubtless about the output.


> How well does it cite the source?

I don't know about the OP's tool, but Open WebUI has its own document database which you can integrate with LLMs, and when answering questions it always cites the source with a link for you to verify.


Revolut OTOH is a bank https://www.revolut.com/

The black coming from a projector isn't full black either: https://www.projectorreviews.com/articles-guides/projector-b...

He also did not do (or at least did not mention) anything about the reflected/diffused and ambient light behind and around the TV screen, which would negatively impact contrast.

Aw, give him a break. Neither do OLED TV reviews, but I don't see everyone piling on them with "well, technically the contrast isn't infinite because the screen is scattering the ambient light".

Have you seen those Samsung QD-OLEDs? In a normally lit room they look more washed out than a VA panel, yet somehow they still have "infinite" contrast...


No. It's not literally infinite contrast.

On the other hand, there's very little light spill from DLP, and what little there is gets attenuated by a really big factor by the LCD.

I'd guess the limiting factor is light scattered elsewhere in the room and by the front surface of the display.
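To put rough numbers on the stacking argument (all figures below are illustrative assumptions, not measurements of any real projector or panel): because the LCD attenuates the projector's residual black-level light, the combined on/off contrast is ideally the product of the two ratios, until room scatter sets a floor.

```python
# Back-of-the-envelope for a DLP projector stacked behind an LCD.
# All numbers are illustrative assumptions, not measurements.

projector_contrast = 2_000      # assumed DLP on/off contrast ratio
lcd_contrast = 1_000            # assumed LCD panel contrast ratio

# The LCD attenuates the projector's light spill, so the ideal
# combined ratio is the product of the two.
combined = projector_contrast * lcd_contrast
print(f"ideal combined contrast ~ {combined:,}:1")

# In practice, light scattered around the room and reflected off the
# front surface sets a floor on achievable black level:
ambient_fraction = 1e-4         # assumed fraction of peak light scattered back
effective = min(combined, 1 / ambient_fraction)
print(f"effective contrast ~ {effective:,.0f}:1")
```

With these made-up numbers the ideal stack would hit 2,000,000:1, but the assumed room scatter caps it at 10,000:1, which is the point about the limiting factor being the room, not the panels.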


True, but for this purpose it's "blacker" than a typical LED-backlit TV. For a YouTube / DIY project, I think the description is fine.

Good point, I hadn't even thought of this.
