Using gitea does not help if your goal is to allow non-auth'ed read-only access to the repo from a web browser. The scrapers use that to hit every individual commit, over and over and over.
We used nginx config to prevent access to individual commits, while still leaving the "rest" of what gitea makes available read-only for non-auth'ed access unaffected.
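Something along these lines (the location regex and the `gitea` upstream name are illustrative guesses, not our exact config; gitea serves per-commit pages under paths like /{owner}/{repo}/commit/{sha}, with similar paths for compare and blame):

```nginx
server {
    # ... other server settings

    # Deny anonymous access to the per-commit, diff, and blame pages
    # that the scrapers hammer, while leaving the rest of gitea
    # readable without auth.
    location ~* ^/[^/]+/[^/]+/(commit|compare|blame)/ {
        return 403;
    }

    location / {
        # hypothetical upstream name; point this at your gitea instance
        proxy_pass http://gitea;
    }
}
```

If logged-in users should still see those pages, you'd route them to gitea and let gitea's own auth decide instead of returning 403 outright.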
Every commit. Every diff between 2 different commits. Every diff with different query parameters. Git blame for each line of each commit.
Imagine a task to enumerate every possible read-only command you could make against a Git repo, and then imagine a farm of scrapers running exactly one of them per IP address.
Ugh Ugh Ugh ... and endless ughs, when all they needed was "git clone" to get the whole thing and spend as much time and energy as they wanted analyzing it.
http {
    # ... other http settings
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
    # ...
}

server {
    # ... other server settings
    location / {
        limit_req zone=mylimit burst=20 nodelay;
        # ... proxy_pass or other location-specific settings
    }
}
Rate limit read-only access at the very least. I know this is a hard problem for open source projects that have relied on web access like this for a while. Anubis?
> Another insight (corollary?) for Sapir-Whorf is that your language prevents you from thinking some things
Last time I looked, the Sapir-Whorf hypothesis was almost universally discredited among linguists and cognitive scientists.
The Wikipedia summary:
"The hypothesis is in dispute, with many different variations throughout its history. The strong hypothesis of linguistic relativity, now referred to as linguistic determinism, is that language determines thought and that linguistic categories limit and restrict cognitive categories. This was a claim by some earlier linguists pre-World War II since then it has fallen out of acceptance by contemporary linguists. Nevertheless, research has produced positive empirical evidence supporting a weaker version of linguistic relativity that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them. "
It does not matter if a hypothesis is discredited if it helps you build an effective model that works. If you use a discredited hypothesis to make bread and make a great tasting and edible bread, then the hypothesis has value. Even if it is "wrong". Because it works.
Here are some questions for you: can you think of any things you cannot think of in your language? Hints: Beethoven, Van Gogh, 7. Can a democracy evolve from Facebook? What kind of political system can evolve from Facebook? Is there a language for democracies? The important thing is not the answer, but the thinking.
No, it doesn't work. Humans (at least) are very powerful metaphor users, and it is typically possible to discuss things for which there is no direct language in terms of metaphors (and analogies). We do this all the time, and it pretty much removes all bounds on what we can talk (and think) about with language.
Lots of poetry makes no sense if you consider it to be a series of words to be literally interpreted according to grammar rules and a dictionary. But it can often hint at meanings and ideas that can't be expressed directly.
Of course, there are some things that always remain that are harder to get rid of. That's not "lack of language preventing you from thinking things", but rather "assumptions so deeply built into language that it is hard, though perhaps not impossible, to escape them".
Look, I thought the Sapir-Whorf hypothesis was great when I learned about it, too. I love the movie Arrival (and the Ted Chiang story it's based on). But if you aren't a social scientist, it can be very appealing (and self-defeating) to latch onto a specific concept you heard about and try to create some grand theory of the world. This is fine, but it's sophistry, not deep thinking.
At some point in the 90s I remember hearing an NPR story about a new startup that was "pioneering" technology that would basically permit atomic/small-molecule level "cat cracking" of just about anything: a furnace that was so hot that everything put inside it broke down to atoms/small-molecules which could then be fractionated off for re-use.
That one seems like it should fall foul of thermodynamics, I guess. Just melting everything together probably increases entropy to the extent that it's at best like extracting elements from mining ore. Whereas before you do that, there is organisation and substances are more concentrated. Well, that's a bit hand-wavy - perhaps someone with actual knowledge of thermodynamics will comment.
I think what recyclers do currently is at least break everything into small pieces, some of which might have a decent concentration of something useful.
Nothing falls afoul of thermodynamics. This is not a closed system: you can inject as much energy into it as you have available. Entropy and thermodynamics play no role here, but I would imagine that (a) the cost of the energy required, (b) containment technology, and (c) what happens after you extract a given substance are/were all very involved in its failure.
This is already done with crude oil, and is called "cat cracking". You heat the crude oil until every component in it becomes gaseous (but still small-molecule); the smaller the molecules, the higher they rise up the "chimney", so you can siphon off particular components at particular heights.
I wasn't arguing that it is impossible, which as you say, is only true in a closed system. What I was saying was that since you wasted the organisation of the system, sorting it out again is going to take more energy than if you didn't. I think that's a consequence of the fact that you increased the entropy.
And even if you can collect those ions at relatively high purities, it is often not particularly useful. Most of the mass is probably carbon, silica, oxygen, hydrogen and so on. In the end there's not that much value there compared to virgin or other sources.
Why do you allow a mobile handheld computing and communication device to define "computing"? I understand that they are important devices and lots of people with a hacker mentality would like to be able to hack them the way old folks once hacked DOS. But the current computing environment is much, much wider than iOS/Android, and if you're going to complain about just one aspect of it, I think it would be better to acknowledge that.
In many ways, things like RPi and Arduino have actually massively expanded the realm of totally hackable computing beyond what was even possible for early personal computer users.
As others have said, it's not so much that tinkering opportunities don't exist. It's more that there's a slump in the market for doing relatively easy jobs for money. You can hack on esp32 all day, but there aren't many ways to make money doing so. Making software for the iPhone was (and still is, at this point) a pretty good gig.
I figure auto mechanics contended with this 25 years ago. Now it's hard to find someone to replace your water pump, if your vehicle even has one. Like auto mechanics, though, these machines still exist and there's still a big market for those skills. It might just require more legwork to find that work.
For the same reason computing used to be defined by a Commodore 64 more than by an IBM System/370-XA mainframe from the same year — they're the most commonly and most easily accessible computing devices.
Old farts like us think the desktop is the default kind of computer, but it isn't. Most computers are phones, followed by tablets and laptops with touchscreens, and desktops are the weirdest ones.
It's not a question of what's the most common. It ought to be a question of what capabilities do you think of when you think of "a computer". Most people do not think of their phone as "a computer", even though us tech heads all know that it is literally just that.
We need to follow the lead of most people here, and recognize that the phone is a deliberately limited device and its capabilities do not define what "a computer" is or should be or could be.
The computing the author enjoyed/enjoys is still out there, they are just looking for it in all the wrong places. Forget about (typical) web development (with its front and backend stacks). Forget about windows and macOS, and probably even mobile (though maybe not).
Hobby projects. C++/Rust/C/Go/some-current-Lisp. Maybe even Zig! Unix/Linux. Some sort of hardware interaction. GPL, so you can share and participate in a world of software created by people a lot more like you and a lot less like Gates and Jobs and Zuckerberg and ...
Sure, corporate programming generally tends to suck, but it always did. You can still easily do what you always loved, but probably not as a job.
At 62, as a native desktop C++ app developer doing realtime audio, my programming is as engrossing, cool, varied and awesome as it has ever been (probably even more so, since the GPL really has won in the world I live in). It hasn't been consumed by next-new-thing-ism, it hasn't been consumed by walled platforms, it hasn't been taken over by massive corporations, and it still very much involves Cool Stuff (TM).
Sure, enjoy your retirement. But for me it's annoying when late-50s+ people tell you to do what they just did. Think about people who are in their 20s or 30s: they are not even halfway through their path to retirement, and some may even still be paying off student debt.
> Stop whining and start doing stuff you love.
You have to understand that it's hard to do stuff that you love when you have to feed your family and pay a mortgage or rent. Not everyone can be, or wants to be, an entrepreneur.
You are just talking from the perspective of someone who has already paid all their debts and raised all their kids, and is now enjoying (or soon will be enjoying) retirement, at least in the sense that you can retire, even if maybe you don't want to.
Retired? I'm not retired and likely won't be for another 8 years.
> But for me it's annoying when late-50s+ people tell you to do what they just did.
The author of TFA is at least 50!
> You are just talking from the perspective of someone who has already paid all their debts and raised all their kids
That part is true. But that was more or less true when I was 50, too.
Finally, the article wasn't about the shitty economic world that we've created for so many people, it was about how programming has changed. Those two are inter-related but they are not the same.
> a monopoly on peer to peer sales of goods like that
I don't know ... around these parts (Santa Fe/ABQ) while Marketplace is very popular, Craigslist continues to be widely used for this, especially since an ever growing number of younger people are not on Facebook (either at all, or not regularly).
I would be just fine with a return to Craigslist but it's still mostly useless in my neck of the woods despite once being the main (digital) tool for p2p sales.
I think this is entirely the wrong way to think about this. While better elected representatives and officials would always be a nice thing, what we need is to ensure that we design systems around them that mitigate their corruption and double standards. We were even (collectively, across humanity) doing better and better at that until not that long ago.
I didn't really mean "regulations" but more a political (and civic) system in which a given individual's corruption etc. gets caught quickly and/or there are too many disincentives for them to do much based on it.
> People just stopped caring about operating systems research and systems programming after ~2005.
and so it was that after that date, all development of

- embedded systems
- kernel drivers
- digital audio workstations
- video editors
- codecs for audio and video
- anything that involved actually controlling non-computer hardware
- game engines

came to a grinding halt, and no further work was done.
What I mean is that all of those things are more of the same things we have been doing since the 90s.
The hardware is better and higher-performing, but until Rust and Zig arrived, the most popular ways of designing system-level software stayed the same. RTOSes work the same as they did in the late 90s / early 00s. The C ABI is still the majority communication interface. Interacting with the OS through system calls stayed the same. After virtual memory and paging, no big change in OS design happened. The main programming design patterns in C and C++ also stayed the same.
One area that stayed interesting is GPU programming. Nowadays CPUs basically present us with a PDP-11 simulator: most of the time you don't need to recompile programs to harness most of the gains from a new CPU. GPUs expose more of their internal hardware details than CPUs, and unlike with CPUs you need to recompile programs (which is what a GPU userspace driver does) to use newer models.