Hacker News: pxc's comments

I came here to make the same recommendation. Just use p7zip for everything; no need to learn a bunch of different compression tools.

If you use `atool`, there is no need to use different tools either – it wraps all the different compression tools behind a single interface (`apack`, `aunpack`, `als`) and chooses the right one based on file extensions.
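For reference, the three wrappers look roughly like this (illustrative invocations; the file names are made up, and exact behavior depends on your atool version):

```shell
# atool picks the right backend tool based on the archive's extension
apack archive.tar.gz project/   # create archive.tar.gz from project/
als archive.tar.gz              # list the archive's contents
aunpack archive.tar.gz          # extract, avoiding scattering files into the CWD
```

The same commands work unchanged for .zip, .7z, .tar.xz, and so on, which is the whole appeal.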

I'll check this out. I actually don't love p7zip's CLI.

There's also `unp`, the universal unpacker.

This is true. Many people I know have uninstalled TikTok because it is too addictive, but no one ever does that with YouTube shorts.


Wrong. YouTube shorts ruined the app for me. No way to disable them.


Not even with parental controls.

I'm kinda OK with my kids using Youtube, but I don't want them getting sucked into the infinite pool of shit that is Shorts. But I can't block it even with parental controls.


There are browser extensions that block shorts. You can also block all algorithmic suggestions. I uninstalled the YouTube app ages ago and wish I'd done it sooner.


I uninstalled the YouTube app too! Without the app, it’s easier to use YouTube more intentionally.


yeah enhancer for youtube was a lifesaver

another reason i hate internet becoming predominantly app-mediated.


I set up a YT Kids account for a young relative and manually approved many channels and hundreds of videos. This is one way you can block Shorts, with the added benefit that it’s no longer an infinite feed.


How feasible is YouTube Kids? Honest question.


ReVanced on Android lets you remove shorts from the app.


NewPipe and Tubular also lack the continuous video feed feature, so you can watch Shorts if someone sends you a link, but they're just like regular videos.


I have seen Chinese users who state that they are gay or bisexual right on their profile pages, too.


A fairly clear hierarchy emerges with enough experience, I think, but I don't know if there's explicit consensus about it of the kind that could make its way into documentation. Here are the rules of thumb, though (in a kind of priority order):

0. If you're new and on the fence about using flakes, go ahead. (If you know you don't want them, fine.)

1. Prefer declarative installation to imperative installation.

2. If a module exists, prefer using it to configure a package to just adding that package to a list of installed packages.

3. 'Native' packages are better than 'alien' packages.

3a. Packaged for Nix is better than managed externally. (I.e., prefer that programs live in the Nix store rather than Flatpak or Homebrew.)

3b. Prefer packages built from source to packages carved out of foreign binaries.

4. Prefer to just rely on Nixpkgs for things that are already in Nixpkgs; only bother with other sources of Nix code (likely distributed as 'flakes') if you know you need them.

5. Prefer smaller installation scopes to larger installation scopes— when installing a package, go with the first of these that will work: per-session (i.e., ephemeral dev env) -> per-user -> system-wide.

6. Prefer Nixlang to not-Nixlang (YAML, JSON, TOML, whatever).

7. If you're not sure, go for it.

If you follow these guidelines you'll make reasonable choices and likely have a decent time. The most important rule is #1, so once you know your OS, your task is to make sure you have at least one module system available to you. (On NixOS, that's NixOS and optionally Home Manager. On other Linux, that's Home Manager. On macOS, that's Home Manager and/or Nix-Darwin.)

After that, everything can find its natural place according to the constraints above. If you need to break or relax a rule, it'll be obvious.

Inevitably you'll end up with things installed in a handful of ways and places, but you'll know why each thing belongs where it is, and you can leave yourself a note with a '#' character anywhere that you think a reminder might be useful. :)
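A minimal sketch of what rules 1 and 2 look like in practice, as a hypothetical Home Manager config (the `programs.git` options are real; the values are illustrative):

```nix
{ pkgs, ... }: {
  # Rule 2: a module exists for git, so configure it through the module...
  programs.git = {
    enable = true;
    userName = "yourname";          # illustrative value
    userEmail = "you@example.com";  # illustrative value
  };

  # ...rather than only adding the package to a list. For tools with no
  # module, declaratively listing the package (rule 1) is still fine:
  home.packages = [ pkgs.ripgrep ];
}
```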


I ran NixOS while I attended university and don't remember any problems with this. Is it a NetworkManager issue?


It mostly goes the other way, I think. The community surveys haven't asked about NixOS desktop usage in particular. Still, I'm certain that a large majority of contributors are running NixOS on their desktops/laptops/workstations.

That said there are prolific and longstanding contributors who focus on non-NixOS and even non-Linux platforms, and corporate users are likely to be running Nix on macOS or Ubuntu (under WSL). It's not surprising that some users who don't use NixOS on laptops or desktops have still become Nixpkgs contributors or maintainers, imo.


Incredibly disgusting drink. I'm convinced Chicagoans only ever pretend to like it in order to prank unsuspecting tourists by sharing a round of shots of it.


Windows' desktop environment is much too lackluster for that. It's uniquely inconsistent (many distinct toolkits with irreconcilable look-and-feel, even in the base system), has poorly organized system configuration apps that are not very capable, takes a long time to start up before the desktop becomes usable, is full of nasty dark patterns, and suffers an infestation of ads in many versions.

Besides the many issues with the desktop itself, Windows offers piss-poor filesystem performance for common developer tools, plus leaves users to contend with the complexity of a split world thanks to the (very slow) 9pfs shares used to present host filesystems to guests and vice versa.

And then there's the many nasty and long-lived bugs, from showstopping memory leaks to data loss on the virtual disks of the guests to broken cursor tracking for GUI apps in WSLg...


> It's uniquely inconsistent (many distinct toolkits with irreconcilable look-and-feel, even in the base system)

While I agree that Windows has long since abandoned UI/UX consistency, it's not like that is unique: On desktop Linux I regularly have mixed Qt/KDE, GTK2, GTK3+/libadwaita and Electron (with every JS GUI framework being a different UI/UX experience) GUIs and dialogs. I'm sure libcosmic/iced and others will be added eventually too.


> On desktop Linux I regularly have mixed Qt/KDE, GTK2, GTK3+/libadwaita and Electron (with every JS GUI framework being a different UI/UX experience) GUIs and dialogs.

And you can choose to install GTK+, Qt, and Electron apps on Windows or macOS, too. That has no bearing on the consistency of the desktop environment itself (not on Linux, macOS, or Windows). That fact is simply not relevant here.

You could point to some specific distros which choose to bundle/preinstall incongruous software— those are operating systems that ship applications based on multiple, inconsistent UI toolkits. But that's neither universal to desktop Linux operating systems nor inherent in them. And many distros that do fit that definition are still not comparable to the state of affairs on Windows— KDE distros that ship a well-integrated Firefox as their browser, for instance, are on the whole much more uniform than the Windows UI mess.


> could point to some specific distros which choose to bundle

Why does that matter if that’s not how most users do it? There is no magical dividing line between a distribution and the user choosing to install a random collection of apps on their own.


'Desktop Linux' isn't an operating system but a family or class of operating systems. Linux distros are operating systems. If we are to make meaningful comparisons to macOS and Windows, then we must compare like to like.


But they are inherently different and not really comparable to macOS or Windows so it wouldn’t make a lot of sense.

For instance, where exactly do you draw the line between which app/package/component is part of a Linux distribution and which is third party? OTOH it’s more than obvious for proprietary systems.


> has poorly organized system configuration app

To be fair, almost all Linux distros are as bad if not worse in this regard.

Things like YaST, which are supposed to fix that, are unambiguously horrible in their own right (extremely slow, crappy UX, etc.).


Everywhere I've worked that had some kind of Salesforce integration, that integration seemed almost incomprehensibly complex or a source of endless problems, and often both. But I've never been (nor wanted to be) very closely involved with any such integrations.

Is Salesforce garbage? Is that just how CRM systems are? Is everybody just doing it wrong? What's the deal?


> Is that just how CRM systems are? Is everybody just doing it wrong? What's the deal?

These kinds of tools cover 80% of what you want to do out-of-the-box.

To build the remaining 20% correctly, you need to either hire expensive consultants or hire in-house staff to build it.

Nobody budgets properly for this, and it isn't in the sales pitch, so that last 20% gets built as horrible spaghetti code by the cheapest possible devs/consultants.

Even if you wanted to pay good salaries and hire people in-house, how many great engineers want to be limited to programming in Apex on Salesforce?


I've only been involved with such Salesforce integrations at one company, but based on that, I can give you my take. I'd be interested to hear others'.

First, Salesforce data changes all occur through APIs (OK), which various enterprise integration tools (Informatica, Mulesoft, etc.) support (OK), but those tools typically don't offer easy options for retrying a change to a specific row that causes an error. If you are updating 100 Accounts in a single Salesforce Bulk API call and 5 are busy/locked, you have to build a lot of custom error handling to notice and retry just those 5 rows. It's not part of most connectors or Python libraries I've seen. Also, 3 of those errors might be fatal and 2 might be retriable, but you have to learn which are which or blindly retry them all. In database terms, the API's transactional semantics are not statement-by-statement ACID but row-by-row within an API request.

Second, no API or SOQL operation can pull back, process, or update more than 50,000 rows.

Given those two things, unless the integration person is skilled at both error handling and testing, some of the object busy/contention failures only show up under production traffic with many concurrent jobs. A generic integration specialist doesn't know about these Salesforce-specific pitfalls, so they get discovered after the integration goes live, under strange production access patterns.
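The per-row error handling described above can be sketched in a few lines. This is a hypothetical helper, not part of any connector: it assumes bulk results shaped like one dict per input row with "success" and "errors" keys (as some Python Salesforce clients return), and the set of safely retriable status codes is an assumption you'd have to validate for your own org:

```python
# Hypothetical sketch of per-row retry classification for a bulk update.
# UNABLE_TO_LOCK_ROW is a classic transient/contention error; treating
# anything else as fatal here is an illustrative simplification.
RETRIABLE = {"UNABLE_TO_LOCK_ROW"}

def partition_failures(rows, results):
    """Split bulk-call input rows into (succeeded, retriable, fatal) lists,
    pairing each input row with its per-row result."""
    ok, retry, fatal = [], [], []
    for row, result in zip(rows, results):
        if result["success"]:
            ok.append(row)
        elif any(e["statusCode"] in RETRIABLE for e in result["errors"]):
            retry.append(row)
        else:
            fatal.append(row)
    return ok, retry, fatal
```

You'd loop on the `retry` list (with backoff and an attempt cap) and surface the `fatal` list for human attention — exactly the custom plumbing the connectors don't give you.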

EDIT: A third issue is that most Salesforce developers are UI-centric in their thinking and training and don't have database, data modeling, or data integration experience to draw on, so their troubleshooting of data issues tends to suffer.


I have been there and done that. In a complex SF org with a lot of triggers, any record update will get blocked.

The only solution is to refactor all the Apex triggers, processes, and flows into something more orderly. Technically it is doable. Politically it is almost impossible. SF is an ERP in most companies and touches every department.


Salesforce definitely makes it easy to screw yourself with customization. It’s a very complex beast.


The regular REST API has request-scoped transactions. Only the Bulk API has the issues you describe. The Bulk API is kind of a special thing anyway and has its quirks. The regular REST API works more like what you’d expect.


Great point. That said, the remaining problem is that regular API calls don't scale/perform well for integrations between systems involving lots of data syncing. A few records? Fine. Many? Not so much.

I would love to be wrong on this.


Yeah, pushing or pulling large (by Salesforce's standards) numbers of records is definitely harder than it should be. On the “push” side you have row-lock errors and no way to disable Apex triggers unless you’ve designed that into the code itself. On the pull side, if you’re trying to extract a large number of records and your SOQL query times out, you’re out of luck. SF is good about creating custom indexes for you through a support case, but it takes time. Even then, on the order of millions of records it’s still difficult.

My day job is implementing large SF projects. Multi-million-record data migrations aren’t unusual. Even if the data is clean and mapped, the migrations take weeks of planning. We go over every inch of the setup to make sure we have the best chance of getting a clean load on try #1. Even so, we schedule for 3 trial loads and verification before a “go live” load into actual production. Even after all that, it’s still an all-nighter with contingency plans and C-suite cell numbers on deck.


Salesforce lets users define their own bespoke data models. This modeling is nearly always done by salespeople who may or may not be good at sales, but are almost never good at data modeling - not an insult, simply stating that data modeling isn't their job - and so the models are almost always a mess. The problems flow from there.


The actual Salesforce core products are very good. Their documentation is well written. The Salesforce flow low code tool works well.

Their overly complex object/row/field permission model is a hot mess. Mulesoft is limited; there is a reason why they tried to buy Informatica.

Their marketing and hype machine hurts their credibility imo.


I'm a software engineer maybe half the age of the speaker, probably less. But I am already gradually going blind. (No one knows whether I'll reach legal blindness in 5 years or 25, but its presence in my future is a certainty.)

It's striking, if not surprising, how relatable this talk is to me. My biggest difficulties are with contrast and brightness, and it's clear that dropping night driving will be the first major life change required by my degrading vision. Many-- maybe even most-- of the compensatory measures the speaker suggests for dealing with vision loss are already part of my life.

For me, these similarities and others feel very natural and very right, even a little comforting. My eye disease is genetic, with no treatments and no cure, and so part of what it has taught and reminded me is that bodies fail, and the terms of when and how they fail aren't up to us. One of the things I really like about this talk is how unsentimental it is (without being cold, either). Disability isn't a question of if but of when and how-- and that's not a tragic thing any more than the abstract fact of death, either-- it's just a part of nature, of life.

I like this talk because it reminds us that aging, injury, disability, and adapting to those things (individually and collectively) are common threads in human lives. They're something we should know to expect. We may not be fully in control, but we can prepare-- prepare ourselves, our families, our workplaces, our governments, our cities and towns and neighborhoods-- to accommodate the inevitability of various shades and varieties of disability in ourselves and the people around us. Embracing an awareness of the universality of disability can ground us even in the face of bodily changes that frighten and grieve us. We can know they're not the end of the world because they've always been part of the world around us.

This talk may be worth a watch even if you think it's not really about you. :)


Good luck. Hope your sight will hold until they discover a cure.

