I've been using Twin as my everyday terminal emulator and terminal multiplexer since ~2000,
slowly adding features as my free time - and other interests - allowed.
As someone pointed out, the look-and-feel is reminiscent of Borland Turbo Vision.
The reason is simple: I started writing it in the early '90s on DOS with a Borland C compiler, and I used the Borland Turbo Vision look-and-feel as a visual guideline (never actually looked at the code, though).
The port to Linux happened in 1999 (it was basically dormant before that), and Unicode support was progressively added around 2015-2016 (initially UCS-2, i.e. only the lowest 64k codepoints, then full UTF-32 internally, with the terminal emulator accepting UTF-8). There are still some missing features, most notably: no grapheme clusters, no fullwidth (Asian etc.) support, no right-to-left support.
Right now I'm adding truecolor support (see https://github.com/cosmos72/twin/tree/truecolor) - it's basically finished, I'm ironing out some remaining bugs, and thinking whether wire compatibility with older versions is worth adding.
And yes, documentation has been stalled for a very long time.
Retrospectively, I should have switched C -> C++ much earlier: lots of ugly preprocessor macros accumulated over time, and while I rewrote the C widget hierarchy as C++ classes, several warts remain.
If you mean the Unicode glyphs listed at https://en.m.wikipedia.org/wiki/Block_Elements
they are supported - you just need a display driver that can render them.
For example, `twin --hw=xft` (it's the default) or `twin --hw=X11`, both with a font that contains them
Xe means the Unicode block that is actually named "Symbols For Legacy Computing". It's not in the BMP. Some bloke named Bruce was doing TUI windows with scrollbars and sizer/menu boxes some years before TurboVision and code page 437. (-:
Alas, it's not finished. You've made the mistakes that all of us have made, and haven't caught up with us, most of us having fixed those mistakes a few years back when implementing 24-bit RGB was in vogue.
This is not, as the function name suggests, a colon, but per ITU/IEC T.416 it should be:
The unfortunate part is that when rendering to a terminal, you don't have any available mechanism apart from hand-decoding the family part of the TERM environment variable, and knowing who made which mistakes, to determine which of the 7 possible colour mechanisms are supported. They are:
1. ECMA-48 standard 8 colour, SGRs 30 to 37, 39, 40 to 47, and 49
2. AIXTerm 16 colour, ECMA-48 plus SGRs 90 to 97 and 100 to 107
3. XTerm 256 colour, ITU T.416 done wrongly with SGR 38;5;n and SGR 48;5;n
4. XTerm 256 colour corrected, ITU T.416 done right with SGR 38:5:n and SGR 48:5:n
5. 24-bit colour take 1, ITU T.416 done wrongly with SGR 38;2;r;g;b and SGR 48;2;r;g;b
6. 24-bit colour take 2, ITU T.416 done wrongly with SGR 38:2:r:g:b and SGR 48:2:r:g:b
7. 24-bit colour take 3, ITU T.416 done right with SGR 38:2::r:g:b::: and SGR 48:2::r:g:b:::
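For concreteness, the three 24-bit variants (5-7) differ only in their separators and in whether the empty colour-space ID parameter is present. A small Go sketch of the wire formats (`fg24` is a made-up helper name, not from any terminal library):

```go
package main

import "fmt"

// fg24 builds a 24-bit foreground SGR sequence in one of the three
// wire formats listed above (variants 5-7).
func fg24(variant int, r, g, b uint8) string {
	switch variant {
	case 5: // semicolons throughout: widespread, but wrong per T.416
		return fmt.Sprintf("\x1b[38;2;%d;%d;%dm", r, g, b)
	case 6: // colons, but the empty colour-space ID is missing
		return fmt.Sprintf("\x1b[38:2:%d:%d:%dm", r, g, b)
	case 7: // colons with an empty colour-space ID, as T.416 specifies
		// (trailing empty tolerance parameters, ":::", may also appear)
		return fmt.Sprintf("\x1b[38:2::%d:%d:%dm", r, g, b)
	}
	return ""
}

func main() {
	for v := 5; v <= 7; v++ {
		fmt.Printf("variant %d: %q\n", v, fg24(v, 255, 128, 0))
	}
}
```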
Few people support 4, and although quite a lot of us have finally got to supporting 7 it isn't quite universal. Egmont Koblinger, I, and others have been spreading the word where we can over the last few years.
There are a few updates to that coming in 1.41, but when it comes to colour they're mainly things like recognizing the "ms-terminal" and "netbsd6" terminal types in the right places.
Yep, I am well aware of the `;` vs `:` confusion in both 256 color and 24-bit color control sequences.
Short of hand-coding "which terminal supports which variant" I do not know any standard mechanism to detect that (beyond the well-known $TERM=...-256color and $COLORTERM=truecolor or $COLORTERM=24bit)
I guess I'll have to add command line options to choose among the variants 1...7 you helpfully listed above.
My main use is to render twin directly on X11, which avoids all these issues. While rendering inside another terminal is important and is not going away, I am OK with a few minor color-related limitations (note: limitations, not bugs) in such a setup, especially if the other terminal does not follow the relevant standards.
Compiling an expression to a tree of closures, and a list of statements to a slice of closures, is exactly how I optimized [gomacro](https://github.com/cosmos72/gomacro), my Go interpreter written in Go.
There are more tricks available there, such as unrolling the loop that calls the list of closures, and having a `nop` closure that is executed when there's nothing left to run but execution is not yet at the end of the unrolled loop.
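A minimal Go sketch of that unrolling trick, assuming statements are simply closures of type `func(*Env)` (the names and the unroll factor of 4 are my own choices here, not gomacro's actual layout):

```go
package main

import "fmt"

// Env is the interpreter's runtime state.
type Env struct{ acc int }

// Stmt is a statement compiled to a closure.
type Stmt func(*Env)

// nop does nothing; it pads the statement list so the unrolled loop
// below never reads past the end.
func nop(*Env) {}

// run executes statements four at a time. Padding with nop replaces a
// per-statement bounds check with one check per group of four.
func run(stmts []Stmt, env *Env) {
	for len(stmts)%4 != 0 {
		stmts = append(stmts, nop)
	}
	for i := 0; i < len(stmts); i += 4 {
		stmts[i](env)
		stmts[i+1](env)
		stmts[i+2](env)
		stmts[i+3](env)
	}
}

func main() {
	add := func(n int) Stmt { return func(e *Env) { e.acc += n } }
	env := &Env{}
	run([]Stmt{add(1), add(2), add(3)}, env) // 3 stmts, padded to 4
	fmt.Println(env.acc) // prints 6
}
```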
For optimal speed, you should move as much code as possible outside the closures.
In particular, you should do the `switch op` at https://github.com/skx/simple-vm/blob/b3917aef0bd6c4178eed0c...
outside the closure, and create a different, specialised closure for each case. Otherwise the "fast interpreter" may be almost as slow as a vanilla AST walker.
The core idea is simple:
do a type analysis on each expression you want to "compile" to a closure, and instantiate the correct closure for each type combination.
Here is a pseudocode example, adapted from gomacro sources:
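(Sketched here in runnable Go rather than pseudocode; a simplified illustration of the approach, not the actual gomacro source.)

```go
package main

import "fmt"

// Env holds runtime state; here just a slice of local int variables.
type Env struct{ ints []int }

// IntExpr is an expression already "compiled" to a closure returning int.
type IntExpr func(*Env) int

// compileBinop performs the op switch once, at compile time, and returns
// a specialised closure; nothing is re-examined during evaluation.
func compileBinop(op string, x, y IntExpr) IntExpr {
	switch op {
	case "+":
		return func(env *Env) int { return x(env) + y(env) }
	case "*":
		return func(env *Env) int { return x(env) * y(env) }
	}
	panic("unknown op: " + op)
}

// lit compiles an integer literal.
func lit(n int) IntExpr { return func(*Env) int { return n } }

// local compiles a reference to the i-th local int variable.
func local(i int) IntExpr { return func(env *Env) int { return env.ints[i] } }

func main() {
	// Compile (v0 + 2) * 3 once...
	e := compileBinop("*", compileBinop("+", local(0), lit(2)), lit(3))
	// ...then evaluate repeatedly without walking the AST again.
	fmt.Println(e(&Env{ints: []int{5}})) // prints 21
}
```

The point is that the `switch op` runs once per compiled node; evaluation only ever executes the pre-selected closure.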
This works best for "compiling" statically typed languages, and while much faster than an AST interpreter, the "tree of closures" above is still ~10 times slower than natively compiled code. And it's usually also slower than JIT-compiled code.
Porting to a different Scheme implementation requires some effort:
schemesh needs a good, bidirectional C FFI and an (eval) that allows any Scheme form, including definitions.
For creating a single `schemesh` executable with the usual shell-compatible options and arguments, the Scheme implementation also needs to be linkable as a library from C:
Chez Scheme provides a `kernel.o` or `libkernel.a` library that you can link into C code, then call the C functions Sscheme_init(), Sregister_boot_file(), and finally Scall0(some_scheme_repl_procedure) or Sscheme_start().
Rash and schemesh start from similar ideas: create a shell scriptable in some dialect of Lisp.
Rash has several limitations, sometimes due to design choices, that schemesh solves:
1. no job control
2. multi-line editing is limited
3. from what I understand, shell syntax is available only at REPL top level. Once you switch to Lisp syntax with `(`, you can return to shell syntax only with `)`. This means you cannot embed shell syntax inside Lisp syntax, i.e. you cannot do `(define j {find -type f | less})`
4. shell commands are Lisp functions, not Lisp objects. Inspecting and redirecting them after they have been created is difficult
5. Rash is written in Racket, which has a larger RAM footprint than schemesh running on vanilla Chez Scheme: at startup, ~160MB vs. ~32MB
6. Racket/Rash support for multi-language at REPL is limited: once you do `#lang racket`, you cannot go back to `#lang rash`
> 3. from what I understand, shell syntax is available only at REPL top level. Once you switch to Lisp syntax with `(`, you can return to shell syntax only with `)`. This means you cannot embed shell syntax inside Lisp syntax, i.e. you cannot do `(define j {find -type f | less})`
It's possible I misunderstand what you mean because I'm not sure what piping to less is supposed to accomplish here, but this is not true. The following program works just fine:
Yes, Rash has a variety of limitations. Let me give some more context to these:
>1. no job control
Racket is missing a feature in its rktio library needed to do job control with its process API, which Rash uses. At one point I added one or two other minor features needed for job control, but I ran out of steam and never finished the final one. It's a small feature, even, though now I don't remember much of the context. I hope I wrote enough notes to go back and finish this some day.
>2. multi-line editing is limited
I always intended to write a nice line editor that would do this properly. But, again, I never got around to it. I would still like to, and I will probably take a serious look at your line editor some time.
The design was intended as something to use interactively as well as for scripting. But since I never improved the line editing situation, even I only use it for scripting. After documentation issues, this is the most pressing thing that I would fix.
>3. from what I understand, shell syntax is available only at REPL top level. Once you switch to Lisp syntax with `(`, you can return to shell syntax only with `)`. This means you cannot embed shell syntax inside Lisp syntax, i.e. you cannot do `(define j {find -type f | less})`
As mentioned, this is not correct: you can recursively switch between shell and lisp.
>4. shell commands are Lisp functions, not Lisp objects. Inspecting and redirecting them after they have been created is difficult
This one is a design flaw. I've meant to go back and fix it (e.g. just retrofitting a new pipe operator that returns the subprocess pipeline segment as an object rather than its ports or outputs), but, of course, haven't gotten around to it.
>5. Rash is written in Racket, which has larger RAM footprint than schemesh running on vanilla Chez Scheme: at startup, ~160MB vs. ~32MB
Yep.
>6. Racket/Rash support for multi-language at REPL is limited: once you do `#lang racket`, you cannot go back to `#lang rash`
So actually `#lang` is not supported at all in the REPL, neither in the Racket REPL nor in the Rash REPL. In practice, what `#lang` does is (1) set the reader for a module, and (2) set the base import for the module, i.e. what symbol definitions are available. With the REPL you have to do this more manually.

The REPL in Racket is sort of second class in various ways, in part due to the "the top level is hopeless" problems for macros. (Search for that phrase and you can find many issues with REPLs and macros discussed over the years on the Racket mailing list.) Defining a new `#lang` in Racket includes various pieces about setting up modules specifically, and since the top-level REPL is not a module, it would need some different support that is currently not there, and would need to be retrofitted for various `#lang`s. But you can start a REPL with an arbitrary reader, and use `eval` with arbitrary modules loaded or symbols defined.

My intention with a Rash line editor would also have been to build some infrastructure for better language-specific REPLs in Racket generally. But, well, obviously I never actually did it. If I do make time for Rash REPL improvements in the near future, it will just as likely be ways of using it more nicely with emacs rather than actually writing a new line editor... we'll see.
I'm always sad when I think about how I've left Rash to languish. In grad school I was always stressed about publication (which I ultimately did poorly at), which sapped a lot of my desire and energy to actually get into the code and make improvements. Since graduating and going into industry, and with kids, I've rarely felt like I have the time or energy after all of my other responsibilities to spend time on hobby projects. Some day I would like to get back into it, fix its issues, polish it up, document it properly, etc. Alas, maybe some day.
[UPDATE] There is also a function (sh-redirect job redirection-args ...) - it can add arbitrary redirections to a job, including pipes, but it's quite low-level and verbose to use
Could this be abstracted enough with the right macros to make a subset of useful lisp commands play well with the shell? It could be a powerful way to extend the shell for interactive use.
I was thinking of a Lisp/Scheme-like frankenshell for a while. A REPL language (and especially a shell) should focus on ergonomics first - we're commanding the computer to do stuff here and now, not writing elaborate programs (usually).
In my opinion, the outermost parens (even when invoking Lisp functions), as well as all the elaborate glue function names, kinda kill it for interactive use. If you think about it, it's leaking the implementation details into the syntax, and makes for poor idioms. Not very Lispy.
My idea is something like:
>>> + 1 2 3
6
(And you would never know if it's /bin/+ or (define (+ ...)))
>>> seq 1 10 | sum
Let's assume seq is an executable, and sum is a Scheme function. Each "token" seq produces (by default delimited by any whitespace; maybe you could override the rules for a local context, parameterize?) is buffered by the shell, and at the end the whole thing is turned into a list of strings. The result is passed to sum as a parameter. (Of course this would break if sum expects a list of integers, but it could also parse the strings as it goes.)
The other way around would also work. If seq produces a list of integers, it's turned into a list of strings and fed into sum as input lines.
The shell could scan $PATH and create a simple function wrapper for each executable.
Now to avoid unnecessary buffering or type conversion, a typed variant of Scheme could be used, possibly with multiple dispatch (per argument/return type). E.g. if the next function in the pipeline accepts an input port or a lazy string iterator, the preceding shell command wrapper could return an output port.
The tricky case with syntax is what to do with tokens like "-9", "3.14", etc. The lexer could store both the parsed value (if it is valid), and the original string. Depending on the context, it could be resolved to either, but retain strong (dynamic) typing when interacting with a Scheme function, so "3.14.15" wouldn't work if a typed function only accepts numbers.
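A minimal Go sketch of that lexer idea (the `Token` shape and `lex` helper are my own guesses at what such a shell might store, not an existing API):

```go
package main

import (
	"fmt"
	"strconv"
)

// Token keeps both the original text and, when the text parses as a
// number, the numeric value; the surrounding context decides which
// representation to use.
type Token struct {
	Text  string  // original text, always retained
	Num   float64 // parsed value, meaningful only when IsNum is true
	IsNum bool
}

// lex turns one whitespace-delimited word into a Token.
func lex(word string) Token {
	n, err := strconv.ParseFloat(word, 64)
	return Token{Text: word, Num: n, IsNum: err == nil}
}

func main() {
	for _, w := range []string{"-9", "3.14", "3.14.15"} {
		t := lex(w)
		fmt.Printf("%q numeric=%v\n", t.Text, t.IsNum)
	}
}
```

A typed function that only accepts numbers would reject `"3.14.15"` (IsNum is false), while a command wrapper would still see the original string.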
> A command invocation followed by an ampersand (&) will be run in the background. Eshell has no job control, so you can not suspend or background the current process, or bring a background process into the foreground. That said, background processes invoked from Eshell can be controlled the same way as any other background process in Emacs.
For things like this, we would have to switch to something like M-x shell or even M-x ansi-term. In Emacs, we have an assortment of shells and terminal implementations. As a long time Emacs user, I know when to use which, so it does not bother me. However, I can imagine how this might feel cumbersome for newer Emacs users.
In fact, this is one of the reasons I think your project is fantastic. It offers some of the Eshell-like experience, and more, to non-Emacs users, which is very compelling!
I think it comes down to the fact that Emacs itself has facilities for managing background processes, and Eshell in general tends to defer back to the surrounding editor for a lot of functionality.
So if there's an "emacs" way of doing things, generally eshell delegates to that, instead of rolling its own.
> That said, background processes invoked from Eshell can be controlled the same way as any other background process in Emacs
I haven't used Eshell much, but this makes a simple "command &" arguably much saner than in a traditional Unix shell.
I imagine that a new feature would be accepted only if someone can make it play nice with existing features. And in case of job control, I have a bad feeling about the complexity involved.
Schemesh is intended as an interactive shell and REPL:
it supports line editing, autocompletion, searchable history, aliases, builtins,
a customizable prompt, and automatic loading of `~/.config/schemesh/repl_init.ss`.
Most importantly, it has job control (CTRL+Z, `fg`, `bg` etc.) and recognizes and extends
Unix shell syntax for starting, redirecting and composing jobs.
An example:
find (lisp-expression-returning-some-string) -type f | xargs ls 2>/dev/null >> ls.log &
> Scsh, in the current release, is primarily designed for the writing of shell scripts -- programming. It is not a very comfortable system for interactive command use: the current release lacks job control, command-line editing, a terse, convenient command syntax, and it does not read in an initialisation file analogous to .login or .profile
I really like how you don’t sacrifice complete command-line first shell feel, and escaping into a sane language with real datastructures is literally one character away.
Rather than the tclsh way of saying “we’ll just make the Lisp seem really shelly” which is a dud to anyone who is not a lisper.
Now, it’d be really cool if schemesh had a TUI library at the maturity level of Ratatui.
So... it sacrifices sub-shell syntax with parentheses being hijacked for Scheme. Have you also lost $(...) shell interpolation as the saner alternative to `...`?
It does not sacrifice sub-shell syntax: it is fully supported,
I just had to rename it from ( ... ) to [ ... ] to avoid conflicts with ( ... ), which switches to lisp syntax.
Also, both $(...) and `...` shell interpolation are fully supported.
The only thing I intentionally sacrificed is shell flow control: schemesh shell syntax does not have the builtins 'case' 'for' 'if' 'while' etc.
In practically all examples I tried, escaping to Scheme for loops and conditional code works better: it avoids the usual pitfalls related to shell string splitting, and usually results in more readable code too, at least for a Lisper.
Note: there's also some additional parsing logic to distinguish between sub-shell syntax [ ... ] and wildcard patterns that use [ ... ] as well
I have a degree in theoretical physics, and also did research on general relativity.
The result is cool, but it's not directly applicable to the traditional (sci-fi) scenario "I travel to the past and meet myself / my parents / my ancestors"
The reason is simple: the authors suppose a CLOSED timelike curve, i.e. something like a circle, where you travel back in time and BECOME your younger self - which by the way only exists because you traveled back in time in the first place.
A slightly different scenario would be much more interesting, but my guess is that it's much harder to analyze:
a NEARLY closed timelike curve, which arrives from the past, coils around itself one or more times - like a coil, indeed - allowing causal interaction between the different turns (i.e. one can interact with one's future self/selves and past self/selves), and finally the last turn leaves toward the future.
> The reason is simple: the authors suppose a CLOSED timelike curve, i.e. something like a circle, where you travel back in time and BECOME your younger self
Exactly. This part of the paper is not really surprising or newsworthy. If you apply periodic boundary conditions, you get periodicity, duh. In the case of CTCs, this has been known for a long time[0].
> A slightly different scenario would be much more interesting, but my guess is that it's much harder to analyze: […]
Agreed. The only result I'm aware of in this context is a paper from the 90s by Echeverria, Klinkhammer, and Thorne about a thought experiment (Polchinski's Paradox) involving a billiard ball entering a wormhole and colliding with its past self. Wikipedia[0] gives a good overview of the result.
More generally, imposing "self-consistency" on a closed cycle of interactions is just a matter of picking a fixed point. Such a fixed point will always exist if the underlying system is continuous - and continuity may always be assumed if the system is allowed to be non-deterministic. (For example, a billiard ball enters a wormhole sending it to the past with probability 50%, or else it is knocked away by a billiard ball sent from the future (and does not enter the wormhole) with probability 50%. This system is self-consistent, but this is achieved by a "mixture" of two outcomes.)
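To make the 50% figure explicit, here is the toy fixed-point calculation behind that example: let $p$ be the probability that the ball enters the wormhole. A past-directed copy emerges, and knocks the original away, exactly when the original entered, so the original actually enters with probability $1 - p$. Self-consistency means

$$ p = 1 - p \quad\Longrightarrow\quad p = \tfrac{1}{2} $$

which is precisely the 50/50 mixture described above.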
Can the ball roll into the wormhole, emerge in the past, hit its past self and stop, while its past self is knocked to roll into the wormhole, emerge in the past, hit its past self ...
Sure, this is another self-consistent solution which is discussed at length in the papers referenced above. But the neat thing about non-determinism is that it adds continuity - thus, a guaranteed existence of some self-consistent solution - even when the underlying system is discrete (as in, the ball is only allowed to either enter the wormhole on its own or be knocked off altogether - which is what creates the purported paradox).
A fixed point involving the dynamics of "complex interpersonal interactions" (to quote the above-linked Wikipedia article) that are typically involved in these purported time-travel paradoxes. Continuity of the underlying physics is enough to ensure that such a fixed point will definitely exist, and allowing for non-determinism is just a convenient way of recovering a sort of continuity even if the underlying physics is assumed to not be continuous.
(These concerns are somewhat comparable to those that involve issues of so-called "metastability" in electronic circuits and indeed other physical systems which are designed to only have a limited number of "discrete" states.)
Most 'time loops' in science fiction might better be described as time knots.
I think of https://en.wikipedia.org/wiki/Predestination_(film) which is much more complicated than the usual time travel scenario; presumably the protagonist leaves but doesn't really enter since the protagonist is their own mother and father (the matter that makes them up does enter since they eat and breathe the way everybody else does; thinking the story through I'd think if I was going to have such a miraculous and singular existence I'd rather be a fantastic creature of some kind [dragon?] as opposed to a relatively boring intersex person capable of both reproductive roles)
Also https://en.wikipedia.org/wiki/The_End_of_Eternity which tames the complexity of time travel by presupposing 'eternity' has a second time dimension, making large-scale engineering of history practical. 'Eternity' itself owes its existence to a time loop which is ultimately broken by the protagonist.
> a NEARLY closed timelike curve, which arrives from the past, coils around itself one or more times - like a coil, indeed - allowing causal interaction between the different turns (i.e. one can interact with one's future self/selves and past self/selves), and finally the last turn leaves toward the future.
The classic sci-fi story describing this is Heinlein's By His Bootstraps. Note, though, that even in this version, the causal interactions are fixed: the same person experiences the events multiple times from different viewpoints, but the events have to be the same each time. They can't change. In Heinlein's story, the main character tries to do something different at one of these interactions and finds that he can't.
If time is closed on itself, then by definition there can be no change from one "round" to another, you have to return to the exact world state you started in. Otherwise it wouldn't be closed. Just like a coil is not a closed shape even if its projection (a circle) is.
Isn't the cool part of this the assertion that the arrow of time flips at points of minimum and maximum entropy? In other words, it's two parallel timelines, not a continuous loop of entropic time. The article dedicates itself to proving this assertion with a bunch of math of which I understood maybe 10%.
I am not a physicist, etc so if I sound daft then that's why.
Your younger self doesn’t have to be a future state of your present self, you just have to induce it, e.g. by being your own father or grandfather. Your younger self doesn’t have to be in your future if you allow some overlap (father) or short gap (grandfather) on the circle.