exDM69's comments

> It was perfectly clear what Hellwig meant by "cancer".

No, it is not perfectly clear.

The generous interpretation is that they meant it is "viral", spreading through dependencies across the codebase (which I don't think is technically accurate as long as CONFIG_RUST=n exists).

The less generous way to interpret it is "harmful". The later messages in the thread suggest that this is more likely.

But I'm left guessing here, and I'm willing to give some benefit of the doubt.

That said, the response to this comment was incredibly bad as well.


> The position of the DMA maintainer seems also to make sense for me, to keep code maintainable over decades it must remain in a nice and tidy state.

Perhaps so, but that discussion was two or three years ago. Stalling other contributors' work now is counterproductive, especially for changes that do not touch files maintained by them.

This does not justify any brigading behavior, though.


Panics also unwind the stack and run all your Drop destructors.

Setting `panic = abort` disables unwinding and this means leaking memory, not closing file descriptors, not unlocking mutexes and not rolling back database transactions on panic. It's fine for some applications to leave this to the operating system on process exit but I would argue that the default unwinding behavior is better for typical userspace applications.
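A small sketch of the difference (the guard type here is illustrative, standing in for a mutex guard or transaction handle):

```rust
use std::panic;

// A guard whose destructor stands in for real cleanup work
// (unlocking a mutex, rolling back a transaction, etc.).
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("drop ran: {}", self.0);
    }
}

fn main() {
    let result = panic::catch_unwind(|| {
        let _g = Guard("cleanup");
        panic!("boom");
    });
    assert!(result.is_err());
    // With the default panic = "unwind", "drop ran: cleanup" is printed
    // during unwinding, before catch_unwind returns. With panic = "abort"
    // in Cargo.toml, the process dies at the panic! and the destructor
    // (and catch_unwind) never run.
}
```

Opting out is a one-liner in Cargo.toml: `panic = "abort"` under `[profile.release]`.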


All the things you describe are done automatically on program exit, even if the program is SIGKILL’ed. The kernel cleans up file descriptors, database transactions are rolled back automatically when the client disconnects (which happens when the program exits and the kernel closes the connection as part of cleanup), and I’m not sure what you mean about mutexes, but if you mean in-memory ones, those don’t matter because the program is gone (if you mean file-based ones, those should also be implicitly unlocked by the kernel when the program exits; at least that’s how a good implementation is supposed to work, e.g. by writing the pid to the file or something.)

The whole of modern operating systems are already very familiar with the idea of programs not being able to exit gracefully, and there’s already a well understood category of things that happen automatically even if your program crashes ungracefully. Whole systems are designed around this (databases issuing rollbacks when the client disconnects, being a perfect example.) The best thing to do is embrace this and never, ever rely on a Drop trait being executed for correctness. Always assume you could be SIGKILLed at any time (which you always can. Someone can issue a kill -9, or you could get OOM killed, etc.)


I'm well aware of this and good that the option exists to bail out with abort instead.

But there are still cases where you would like to fsync your mmaps, print out a warning message or just make sure your #[should_panic] negative tests don't trigger false positives in your tooling (like leak detectors or GPU validators) or abort the whole test run.

It's not perfect by any means, but it's better than potentially corrupting your data when a trivial assert fires, or making negative tests spew warnings in CI runs.

It's very easy to opt out from, and I don't consider the price of panic handlers and unwinding very expensive for most use cases.


Right, sorry for the patronizing tone, I’m sure you know all this.

But I tend to lament the overall tendency for people to write cleanup code in general for this kind of thing. It’s one of those “lies programmers believe about X” kinds of scenarios. Your program will crash, and you will hit situations where your cleanup code will not run. You could get OOM killed. The user can force quit you. Hell, the power could go out! (Or the battery could go dead, etc.)

Nobody should ever write code that is only correct if they are given the opportunity to perfectly clean up after any failure that happens.

I see this all the time: CLI apps that trap Ctrl-C and tell you you can’t quit (yes I bloody well can, kill -9 is a thing), apps which don’t bother double checking that the files they left behind on a previous invocation are actually still used (stale pid files!!!), coworkers writing gobs of cleanup code that could have been avoided by simply doing nothing, etc etc.


If I understood the algorithm correctly, it is using the standard WFC algorithm to generate blocks that match a constraint.

Then it creates a tiling of those blocks, and substitutes parts of the tiling with new blocks generated using WFC.

So it's a higher level algorithm, using WFC as its building block.


Yes, this is the step that violates the WFC algorithm and makes it no longer WFC.

It is now just a procedural algorithm, which is faster but loses some of the magic of what makes WFC _so good_.

You can tell by looking at the renders too, the before-and-after of both methods. The difference is incomparable.

That being said, it is cool as a runtime-optimized non-WFC WFC-approximating algorithm.


Here's another interesting O(1) memory allocator but with arbitrary sized allocations and low fragmentation. Negative side is relatively high memory overhead (a few dozen bytes per allocation).

This kind of allocator is often used to suballocate GPU memory in game and graphics applications.

I'm using a variant of this algorithm with added support for shrinkable and aligned allocations and flexible bin sizing.

You can also extend this idea to two dimensions to create texture atlases, which is possible in O(1) for power of two sized allocations.

Original: https://github.com/sebbbi/OffsetAllocator Rust port: https://crates.io/crates/offset-allocator
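As I understand sebbbi's write-up, the O(1) behavior comes from a "small float" size-to-bin mapping plus a bitmask scan over non-empty free-list bins. A minimal sketch of that idea (constants and function names are mine, not the crate's actual API; real implementations use a two-level bitmask to cover more than 64 bins):

```rust
// Each power of two is split into 2^MANTISSA_BITS linear sub-bins, so bin
// sizes track requested sizes within ~12.5% while the total bin count stays
// small enough to index with a bitmask.
const MANTISSA_BITS: u32 = 3;
const MANTISSA_VALUE: u32 = 1 << MANTISSA_BITS; // 8
const MANTISSA_MASK: u32 = MANTISSA_VALUE - 1;

/// Map an allocation size to the first bin whose blocks are >= size.
fn size_to_bin_round_up(size: u32) -> u32 {
    if size < MANTISSA_VALUE {
        return size; // tiny sizes get exact bins
    }
    let highest_set = 31 - size.leading_zeros();
    let mantissa_start = highest_set - MANTISSA_BITS;
    let exp = mantissa_start + 1;
    let mut mantissa = (size >> mantissa_start) & MANTISSA_MASK;
    // Round up if any truncated low bits were set; this may carry over
    // into the first bin of the next exponent, which is exactly right.
    if size & ((1 << mantissa_start) - 1) != 0 {
        mantissa += 1;
    }
    (exp << MANTISSA_BITS) + mantissa
}

/// O(1) search: given a bitmask of non-empty free-list bins, find the
/// first bin at or above `min_bin`, or None if nothing fits.
fn find_nonempty_bin(nonempty_bins: u64, min_bin: u32) -> Option<u32> {
    let mask = nonempty_bins & !((1u64 << min_bin) - 1);
    if mask == 0 { None } else { Some(mask.trailing_zeros()) }
}

fn main() {
    // Bins are exact up to 8, then geometric with 8 sub-bins per octave.
    assert_eq!(size_to_bin_round_up(7), 7);
    assert_eq!(size_to_bin_round_up(31), 24); // rounds up to the 32 bin
    assert_eq!(size_to_bin_round_up(32), 24);
    // Bit 7 set => bin 7 has a free block; a request in bin 4 can use it.
    assert_eq!(find_nonempty_bin(1 << 7, 4), Some(7));
    assert_eq!(find_nonempty_bin(1 << 3, 4), None);
    println!("bin mapping ok");
}
```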


Very delightful article. Based on my experience in "hobby" OS programming, I would add setting up GDB debugging as early as possible. It was a great help in my projects and an improvement over debugging with the QEMU monitor only.

QEMU contains a built-in GDB server; you'll need a GDB client built for the target architecture (riscv in this case) and to connect to the QEMU GDB server over the network.

https://qemu-project.gitlab.io/qemu/system/gdb.html
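A minimal session might look like this (the machine type, `kernel.elf`, and the `kmain` symbol are placeholders for your own project):

```shell
# Start QEMU halted (-S) with its GDB stub listening on tcp::1234 (-s)
qemu-system-riscv64 -machine virt -nographic -kernel kernel.elf -s -S

# In another terminal: a riscv-capable GDB (gdb-multiarch also works)
riscv64-unknown-elf-gdb kernel.elf \
    -ex 'target remote localhost:1234' \
    -ex 'break kmain' \
    -ex 'continue'
```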


Agree, and I'll add 3 other really useful QEMU features for osdev:

1) Record & Replay: Record an execution and replay it back. You can even attach GDB while replaying, and go back in time while debugging with "reverse-next" and "reverse-continue": https://qemu-project.gitlab.io/qemu/system/replay.html

2) The QEMU monitor, especially the "gva2gpa" and "xp" commands which are very useful to debug stuff with virtual memory

3) "-d mmu,cpu_reset,guest_errors,unimp": Basically causes QEMU to log when your code does something wrong. Also check "trace:help", there's a bunch of useful stuff to debug drivers


Record & replay sounds really nice, but the actual reverse-debugging is broken, see https://gitlab.com/qemu-project/qemu/-/issues/2634


thanks for sharing! qemu is very powerful, but it’s hard to discover a lot of these features


> you'll need a GDB client built for the target architecture

Thankfully, GDB has a multiarch build these days which should work for all well-behaved targets in a single build.

(the place it is known to fail is for badly-behaved (embedded?) targets where there are configuration differences but no way to identify them)


> Rust game development seems more about releasing half baked crates than actual games

It's because these are mostly passion projects by hobbyists. A lot of the stuff is written by undergrad students with a lot of time on their hands, and once they graduate and move into professional life they no longer have the time and the projects get abandoned.

Creating high quality, reusable components (like game engines) takes a lot of effort. It's unlikely to happen without funding.

And that's only half of the story, to make an actual game (or other product) you'll still need the art content which is expensive.

I'm about 3 years worth of Saturdays and 20kLOC into my current project in this field and I haven't even released anything because I don't need anyone to tell me it's "half baked". While I have some stuff that's starting to be in a shape where it could be usable to others, it would still need a lot of effort to make it friendly for others (e.g. api docs, examples and stuff) to jump in. That again takes effort (there are only so many Saturdays in a year) and it's much less interesting work than doing the research-ey bits and experimental work.

Unfortunately I think this is a chicken and egg problem. No one with the funds and the staffing to build a game or an engine is going to jump into unproven technology. Not using Unreal or Unity is a huge gamble to take with your business venture.


There's nothing wrong with writing your own game engine rather than going for Unreal or Unity, in fact that's one way of creating a truly unique game.

My comment about "half baked" referred to the crates that people release instead of actual games.

The Rust game dev community might be wise to steer away from the "we gave up on developing our game, but hey Rustaceans, here are some crates you might find useful!" approach


despite being in the same industry, engine programmers =/= game designers. You may as well be comparing a network engineer to a front end web dev.

As for "actual games": indie dev with mature tools is hard enough to ship properly as it is, and many don't make money. Making the kind of game that would attract attention requires funding that these communities often lack.


I was thinking of the kinds of games mentioned in this thread: Tiny Glade and The Gnorp Apologue. Small indie games made by people who fill the role of both engine programmer and game designer.

On a larger, more professional scale someone like Jonathan Blow comes to mind


There are plenty of passion projects that have been successful. So this is not an excuse. I've abandoned rust because I don't find it useable to me with unneeded complexity to code. Also with graphics most of the code was not safe. So believing only took me that far.


so the retort is survivor's bias? Hollow knight was a game jam game, so my 2d autorunner definitely coulda made 8m dollars, right?

>I've abandoned rust because I don't find it useable to me with unneeded complexity to code.

I don't want to be too dismissive, but if you don't care about code safety, Rust is the absolute worst language to choose for game development. Yes, it's a lot more work upfront, and iterative game development wants to break things quickly to figure out a good game loop.

I want to make a game myself in Rust one day and I know for certain that my scripting will definitely not be in Rust.


Dude, it is not safe programming if you put all your code under unsafe brackets.

You can write safe code in other languages but it really requires more advanced programmers than Rust programmers will ever be - the idea of Rust is simply to take responsibility away from all programmers and put it on the Rust developers, so how much you care about code safety is not something you need to think about when programming in Rust.

>>>I want to make a game myself in Rust one day and I know for certain that my scripting will definitely not be in Rust.

I mean - do I need to say more?


>it is not safe programming if you put all your code under unsafe brackets.

Put less code in unsafe brackets. I haven't seen how a proper renderer is made in Rust, but I'd be shocked if something on the scale of Bevy still was just "Rust without rust", a quick C++ port.

I'm sure there will inevitably be some low level hardware tricks that need unsafe blocks, but that's much less needed in most modern code than back in the day. And if we're being frank, those kinds of optimizations probably aren't top priority compared to, say, a proper front end scene graph to interact with.

>I mean - do I need to say more?

That every language has strengths and weaknesses? I'm all for any wisdom you wish to share. I won't pretend to be an expert in any language.

My design decision (or rather, suspicion) comes more from the fact that scripting has different demands (rapid iteration) than the underlying foundation (rendering/physics/asset management, which can create the nastiest kinds of bugs). There's an inevitable issue in bridging languages, but I think overall it would give the best of both worlds.


> Dude, it is not safe programming if you put all your code under unsafe brackets.

I'd be surprised if my Rust game (custom engine) had more than 1% of its code in unsafe blocks. If yours does have "ALL under unsafe brackets" you are doing Rust really wrong.


The distance from Helsinki to Tallinn is just above 80 km and there is only a narrow sliver of international waters in between the territorial waters. It is quite well monitored by surface and underwater surveillance.

In this particular case there is no need for a choke point, it already is one.

The ship suspected of this sabotage was promptly escorted to Finnish territorial waters and is guarded by coast guard vessels and helicopters.

This time the ship in question is under the Cook Islands / New Zealand flag, and they are more likely to cooperate than in the previous instances, where the vessels were under Hong Kong and Chinese flags. Apprehending a vessel in international waters requires approval from the nation whose flag it sails under.


In Vim, the "stupid" auto complete will get 80% of the way there and works without setting up a language server and it works for any language.

By default it will give all the identifiers in all open files. Having a few of the relevant files open in the editor will get you pretty far.

I do use LSP in other environments where it is available but I still do a lot of my work with just plain vim because I jump between code bases a lot and setting up LSP for C/C++ needs some extra steps to work.


Why does adding `backtrace` to thiserror/anyhow require adding debug symbols?

You'll certainly need it if you want to have human readable source code locations, but doesn't it work with addresses only? Can't you split off the debug symbols and then use `addr2line` to resolve source code locations when you get error messages from end users running release builds?
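The workflow I have in mind, roughly (the binary name and the address are made up):

```shell
# Keep a full copy of the debug info, then strip the shipped binary
objcopy --only-keep-debug target/release/myapp myapp.debug
objcopy --strip-debug target/release/myapp
objcopy --add-gnu-debuglink=myapp.debug target/release/myapp

# Later: resolve a raw frame address from an end user's error report,
# with function names (-f) and demangling (-C)
addr2line -e myapp.debug -f -C 0x5a3f2
```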


It should be possible (it'd need to also save memory map), but for some reason Rust's standard library wants to resolve human-readable paths at runtime.

Additionally, Rust has absurdly overly precise debug info.

Even set to minimum detail, it's still huge, and still keeps all of the layers of those "zero-cost" abstractions that were removed from the executable, so every `for` loop and every arithmetic operation has layers upon layers of debug junk.

External debug info is also more fragile. It's chronically broken on macOS (Rust doesn't test it with Apple's tools). On Linux, it often needs to use GNU debuginfo and be placed in system-wide directories to work reliably.


> (it'd need to also save memory map)

Typically the memory map is only required when capturing the backtrace; the stack frames' addresses are given/stored/printed relative to the binary file's sections (with the load-time address subtracted). E.g. SysRq+l on Linux. This occurs at runtime, so saving the memory map is not necessary in addition to the relative addresses.

Not sure if this is viable on all the platforms that Rust supports.

> but for some reason Rust's standard library wants to resolve human-readable paths at runtime.

Ah, I see that Rust's `std::backtrace::Backtrace` is missing any API to extract address information and it does not print the address infos either. Even with the `backtrace_frames` feature you only get a list of frames but no useful info can be extracted.

Hopefully this gets improved soon.

> External debug info is also more fragile.

I use external debug info all the time because uploading binaries with debug symbols to the (embedded) devices I run the code on is prohibitively expensive. It needs some extra steps in debugging but in general it seems to work reliably at least on the platforms I work with. The debugger client runs on my local computer with the debug symbols on disk and the code runs under a remote debugger on the device.

I'm sure there are flaky platforms that are not as reliable.


Your binary usually won't get loaded at the same address in memory. The addresses would be useless without the memory map.

That's solvable, though. The bigger problem is how you unwind the stack: the stack is not generally unwindable unless you're the compiler. Debug symbols include information from the compiler about the stack sizes and shapes to help with unwinding it. It's quite possible to include such information in the final binary without adding full debug symbols; a lot of compilers just don't have a specification for that.


You don’t need debug symbols to unwind the stack, you just need the .eh_frame section, which compilers emit by default regardless of whether you’re building with debug symbols.

Source: I work on a profiler (Parca) that does stack unwinding. It works fine on Rust binaries with or without debug symbols.


> Your binary usually won't get loaded at the same address in memory.

The addresses you typically see in a backtrace error message (with debug syms disabled) are relative to the sections in the binary file, the runtime address it was loaded at has already been taken into account and subtracted. At least that's how you typically see a backtrace address in a typical native app on Linux.

> The bigger problem is how you unwind the stack.

Rust can unwind the stack on panic when built without debug symbols.


