From athornton at gmail.com Wed Feb 2 14:53:50 2022 From: athornton at gmail.com (Adam Thornton) Date: Tue, 1 Feb 2022 21:53:50 -0700 Subject: [COFF] [TUHS] Compilation "vs" byte-code interpretation, was Re: Looking back to 1981 - what pascal was popular on what unix? In-Reply-To: References: <0f83f174-eeca-30fb-7b98-77fb0da80f2e@gmail.com> <9E47A62E-3AAD-491E-9164-3DCAD22EC1F7@kdbarto.org> <71ce6652-cf15-44db-01df-62ab89a5a134@gmail.com> Message-ID:

On Mon, Jan 31, 2022 at 10:17 AM Paul Winalski wrote:

> On 1/30/22, Steve Nickolas wrote:
> > And I think I've heard the Infocom compilers' bytecode called "Z-code" (I
> > use this term too).
> That is correct. The Infocom games ran on an interpreter for an
> abstract machine called the Z-machine. Z-code is the Z-machine's
> instruction set. There is a freeware implementation out there called
> Frotz.
>

There's a reasonably functional Frotz implementation for TOPS-20, as it happens. The ZIP interpreter was easier to port to 2.11BSD on the PDP-11.

https://github.com/athornton/tops20-frotz

Adam
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From peter at rulingia.com Wed Feb 2 18:46:34 2022 From: peter at rulingia.com (Peter Jeremy) Date: Wed, 2 Feb 2022 19:46:34 +1100 Subject: [COFF] Compilation "vs" byte-code interpretation, was Re: Looking back to 1981 - what pascal was popular on what unix? In-Reply-To: References: <0f83f174-eeca-30fb-7b98-77fb0da80f2e@gmail.com> <9E47A62E-3AAD-491E-9164-3DCAD22EC1F7@kdbarto.org> Message-ID:

=> COFF

On 2022-Jan-30 10:07:15 -0800, Dan Stromberg wrote:
>On Sun, Jan 30, 2022 at 8:58 AM David Barto wrote:
>
>> Yes, the UCSD P-code interpreter was ported to 4.1 BSD on the VAX and it
>> ran natively there. I used it on sdcsvax in my senior year (1980).
>
>This reminds me of a question I've had percolating in the back of my mind.
>
>Was UCSD Pascal "compiled" or "interpreted" or both?
>
>And is Java? They both have a byte code interpreter.
A bit late to the party but my 2¢: I think it's fairly clear that both UCSD Pascal and Java are compiled - to binary machine code for a p-code machine or JVM respectively. That's no different to compiling (eg) C to PDP-11 or amd64 binary machine code.

As for how the machine code is executed:
* p-code was typically interpreted but (as mentioned elsewhere) there were a number of hardware implementations.
* Java bytecode is often executed using a mixture of interpretation and (JIT) compilation to the host's machine code. Again there are a number of hardware implementations.

And looking the other way, all (AFAIK) PDP-11's were microcoded, therefore you could equally well say that PDP-11 machine code is being interpreted by the microcode on a "real" PDP-11. And, nowadays, PDP-11 machine code is probably more commonly interpreted using something like simh than being run on a hardware PDP-11. Typical amd64 implementations are murkier - with machine code being further converted ("compiled"?) into a variable number of micro-ops that have their own caches and are then executed on the actual CPU. (And, going back in time, the Transmeta Crusoe explicitly did JIT conversion from IA-32 machine code to its own proprietary machine code).

-- Peter Jeremy
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From jnc at mercury.lcs.mit.edu Wed Feb 2 20:55:07 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 2 Feb 2022 05:55:07 -0500 (EST) Subject: [COFF] Compilation "vs" byte-code interpretation, was Re: Looking back to 1981 - what pascal was popular on what unix? Message-ID: <20220202105507.DF17B18C084@mercury.lcs.mit.edu>

> From: Peter Jeremy
> all (AFAIK) PDP-11's were microcoded

Not the -11/20; it pre-dated the fast, cheap ROMs needed to go the microcode route, so it used a state machine.
All the others were, though (well, I don't know about the Mentec ones). Noel

From cym224 at gmail.com Thu Feb 3 07:15:08 2022 From: cym224 at gmail.com (Nemo Nusquam) Date: Wed, 2 Feb 2022 16:15:08 -0500 Subject: [COFF] SICP [Was: Re: [TUHS] ratfor vibe] (moved to COFF) In-Reply-To: <65068AA0-BFEF-46B8-9068-2A24039371D3@acm.org> References: <20220201181909.6224518C086@mercury.lcs.mit.edu> <65068AA0-BFEF-46B8-9068-2A24039371D3@acm.org> Message-ID: <15df45d9-29cc-1712-e7de-967076306b86@gmail.com>

Replying on COFF as firmly in COFF territory.

On 2022-02-01 16:50, Win Treese wrote:
>> On Feb 1, 2022, at 1:19 PM, Noel Chiappa wrote:
>>
>> From: Clem Cole
>>> So by the late 70s/early 80s, [except for MIT where LISP/Scheme reigned]
>> Not quite. The picture is complicated, because outside the EECS department,
>> they all did their own thing - e.g. in the mid-70's I took a programming
>> intro course in the Civil Engineering department which used Fortran. But in
>> EECS, in the mid-70's, their intro programming course used assembler
>> (PDP-11), Algol, and LISP - very roughly, a third of the time in each. Later
>> on, I think it used CLU (hey, that was MIT-grown :-). I think Scheme was used
>> later. In both of these cases, I have no idea if it was _only_ CLU/Scheme, or
>> if they did part of it in other languages.
> I took 6.001 (with Scheme) in the spring of 1983, which was using a course
> handout version of what became Structure and Interpretation of Computer
> Programs by Sussman and Abelson. My impression was that it had been
> around for a year before that, but not much more, and it was part of
> revamping the EECS core curriculum at the time.

I recall that one of the SICP authors wrote an interesting summary of 6.001 (with Scheme) but I cannot find it. Incidentally, SICP with JavaScript will be released next year: https://mitpress.mit.edu/books/structure-and-interpretation-computer-programs-1

N.
> In at least the early 80s, CLU was used in 6.170, Software Engineering > Laboratory, in which a big project was writing a compiler. > > And Fortran was still being taught for the other engineering departments. > In 1982(ish), those departments had the Joint Computing Facility for a lot > of their computing, of which the star then was a new VAX 11/782. > > - Win > From crossd at gmail.com Sat Feb 5 09:18:09 2022 From: crossd at gmail.com (Dan Cross) Date: Fri, 4 Feb 2022 18:18:09 -0500 Subject: [COFF] [TUHS] more about Brian... [really Rust] In-Reply-To: <202202040234.2142YeKN3307556@darkstar.fourwinds.com> References: <202202011537.211FbYSe017204@freefriends.org> <20220201155225.5A9541FB21@orac.inputplus.co.uk> <202202020747.2127lTTh005669@freefriends.org> <7C19F93B-4F21-4BB1-A064-0307D3568DB7@cfcl.com> <1nFWmo-1Gn-00@marmaro.de> <202202040234.2142YeKN3307556@darkstar.fourwinds.com> Message-ID: [TUHS to Bcc, +COFF ] This isn't exactly COFF material, but I don't know what list is more appropriate. On Thu, Feb 3, 2022 at 9:41 PM Jon Steinhart wrote: > Adam Thornton writes: > > Do the august personages on this list have opinions about Rust? > > People who generally have tastes consonant with mine tell me I'd like > Rust. > > Well, I'm not an august personage and am not a Rust programmer. I did > spend a while trying to learn rust a while ago and wasn't impressed. > > Now, I'm heavily biased in that I think that it doesn't add value to keep > inventing new languages to do the same old things, and I didn't see > anything > in Rust that I couldn't do in a myriad of other languages. > I'm a Rust programmer, mostly using it for bare-metal kernel programming (though in my current gig, I find myself mostly in Rust userspace...ironically, it's back to C for the kernel). That said, I'm not a fan-boy for the language: it's not perfect. 
I've written basically four kernels in Rust now, to varying degrees of complexity from "turn the computer on, spit hello-world out of the UART, and halt" to most of a v6 clone (which I really need to get around to finishing) to two rather more complex ones. I've done one ersatz kernel in C, and worked on a bunch more in C over the years. Between the two languages, I'd pick Rust over C for similar projects. Why? Because it really doesn't just do the same old things: it adds new stuff. Honest!

Further, the sad reality (and the tie-in with TUHS/COFF) is that modern C has strayed far from its roots as a vehicle for systems programming, in particular, for implementing operating system kernels ( https://arxiv.org/pdf/2201.07845.pdf). C _implementations_ target the abstract machine defined in the C standard, not hardware, and they use "undefined behavior" as an excuse to make aggressive optimizations that change the semantics of one's program in such a way that some of the tricks you really do have to do when implementing an OS are just not easily done. For example, consider this code:

    uint16_t mul(uint16_t a, uint16_t b) {
        return a * b;
    }

Does that code ever exhibit undefined behavior? The answer is that "it depends, but on most platforms, yes." Why? Because most often uint16_t is a typedef for `unsigned short int`, and because `short int` is of lesser "rank" than `int` and usually not as wide, the "usual arithmetic conversions" will apply before the multiplication. This means that the unsigned shorts will be converted to (signed) int. But on many platforms `int` will be a 32-bit integer (even on 64-bit platforms!). However, the range of an unsigned 16-bit integer is such that the product of two uint16_t's can be larger than whatever is representable in a signed 32-bit int, leading to overflow, and signed integer overflow is undefined behavior. But does that _matter_ in practice?
Potentially: since signed int overflow is UB, the compiler can decide it would never happen. And so if the compiler decides, for whatever reason, that (say) a saturating multiplication is the best way to implement that multiplication, then that simple single-expression function will yield results that (I'm pretty sure...) the programmer did not anticipate for some subset of inputs. How do you fix this?

    uint16_t mul(uint16_t a, uint16_t b) {
        unsigned int aa = a, bb = b;
        return aa * bb;
    }

That may sound very hypothetical, but similar things have shown up in the wild: https://people.csail.mit.edu/nickolai/papers/wang-undef-2012-08-21.pdf In practice, this one is unlikely. But it's not impossible: the compiler would be right, the programmer would be wrong. One thing I've realized about C is that successive generations of compilers have tightened the noose on UB so that code that has worked for *years* all of a sudden breaks one day. There be dragons in our code.

After being bit one too many times by such issues in C I decided to investigate alternatives. The choices at the time were either Rust or Go: for the latter, one gets a nice, relatively simple language, but a big complex runtime. For the former, you get a big, ugly language, but a minimal runtime akin to C: to get it going, you really don't have to do much more than set up a stack and jump to a function. While people have built systems running Go at the kernel level ( https://pdos.csail.mit.edu/papers/biscuit.pdf), that seemed like a pretty heavy lift. On the other hand, if Rust could deliver on a quarter of the promises it made, I'd be ahead of the game. That was sometime in the latter half of 2018 and since then I've generally been pleasantly surprised at how much it really does deliver. For the above example, integer overflow is defined to trap. If you want wrapping (or saturating!)
semantics, you request those explicitly:

    fn mul(a: u16, b: u16) -> u16 {
        a.wrapping_mul(b)
    }

This is perfectly well-defined, and guaranteed to work pretty much forever.

> But, my real issue came from some of the tutorials that I perused. Rust is
> being sold as "safer". As near as I can tell from the tutorials, the model
> is that nothing works unless you enable it. Want to be able to write a
> variable? Turn that on. So it seemed like the general style was to write
> code and then turn various things on until it ran.
>

That's one way to look at it, but I don't think that's the intent: the model is rather, "immutable by default." Rust forces you to think about mutability, ownership, and the semantics of taking references, because the compiler enforces invariants on all of those things in a way that pretty much no other language does. It is opinionated, and not shy about sharing those opinions.

> To me, this implies a mindset that programming errors are more important
> than thinking errors, and that one should hack on things until they work
> instead of thinking about what one is doing. I know that that's the
> modern definition of programming, but will never be for me.

It's funny, I've had the exact opposite experience. I have found that it actually forces you to invest a _lot_ more up-front thought about what you're doing. Writing code first, and then sprinkling in `mut` and `unsafe` until it compiles is a symptom of writing what we called "crust" on my last project at Google: that is, "C in Rust syntax." When I convinced our team to switch from C(++) to Rust, none of us were really particularly adept at the language, and we all hit similar walls of frustration; at one point, an engineer quipped, "this language has a near-vertical learning curve."
And it's true that we took a multi-week productivity hit, but once we reached a certain level of familiarity, something equally curious happened: our debugging load went way, _way_ down and we started moving much faster. It turned out it was harder to get a Rust program to build at first, particularly with the bad habits we'd built up over decades of whatever languages we came from, but once it did those programs very often ran correctly the first time. You had to think _really hard_ about what data structures to use, their ownership semantics, their visibility, locking, etc. A lot of us had to absorb an emotional gut punch when the compiler showed us things that we _knew_ were correct were, in fact, not correct. But once code compiled, it tended not to have the kinds of errors that were insta-panics or triple faults (or worse, silent corruption you only noticed a million instructions later): no dangling pointers, no use-after-free bugs, no data races, no integer overflow, no out-of-bounds array references, etc. Simply put, the language _forced_ a level of discipline on us that even veteran C programmers didn't have. It also let us program at a moderately higher level of abstraction; off-by-one errors were gone because we had things like iterators. ADTs and a "Maybe" monad (the `Result` type) greatly improved our error handling. `match` statements have to be exhaustive so you can't add a variant to an enum and forget to update code to account for it in just that one place (the compiler squawks at you). It's a small point, but the `?` operator removed a lot of tedious boilerplate from our code, making things clearer without sacrificing robust failure handling. Tuples for multiple return values instead of using pointers for output arguments (that have to be manually checked for validity!) are really useful. Pattern matching and destructuring in a fast systems language? Good to go.
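To make that list of features concrete, here is a minimal sketch; it is not code from the original message, and the `Shape` type, `measure`, and `parse_rect` are invented purely for illustration. It shows an exhaustive, destructuring `match`, a tuple return value in place of output pointers, and `Result` with the `?` operator:

```rust
use std::f64::consts::PI;

// Hypothetical example type; nothing like it appears in the thread.
#[derive(Debug)]
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Tuple return: (area, perimeter) -- no output pointers to validate.
fn measure(s: &Shape) -> (f64, f64) {
    // `match` must cover every variant: add a `Triangle` variant and
    // this function stops compiling until it is handled here.
    match s {
        Shape::Circle { radius } => (PI * radius * radius, 2.0 * PI * radius),
        Shape::Rect { w, h } => (w * h, 2.0 * (w + h)),
    }
}

// `?` propagates the Err case to the caller without boilerplate checks.
fn parse_rect(ws: &str, hs: &str) -> Result<Shape, std::num::ParseFloatError> {
    let w: f64 = ws.parse()?;
    let h: f64 = hs.parse()?;
    Ok(Shape::Rect { w, h })
}

fn main() {
    let (area, perimeter) = measure(&Shape::Rect { w: 3.0, h: 4.0 });
    assert_eq!((area, perimeter), (12.0, 14.0));

    let (circle_area, _) = measure(&Shape::Circle { radius: 1.0 });
    assert!((circle_area - PI).abs() < 1e-12);

    // Destructuring a Result with `match` instead of checking errno.
    match parse_rect("3.0", "oops") {
        Ok(shape) => println!("parsed: {:?}", shape),
        Err(e) => println!("parse failed: {}", e),
    }
}
```

In C the equivalent would typically return an error code and write results through pointers; here the failure path is an ordinary value that the compiler will not let you silently ignore.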
In contrast, I ran into a "bug" of sorts with KVM due to code I wrote that manifested itself as an "x86 emulation error" when it was anything but: I was turning on paging very early in boot, and I had manually set up an identity mapping for the low 4GiB of address space for the jump from 32-bit to 64-bit mode. I used gigabyte pages since it was easy, and I figured it would be supported, but I foolishly didn't check the CPU features when running this under virtualization for testing and got that weird KVM error. What was going on? It turned out KVM in this case didn't support gig pages, but the hardware did; the software worked just fine until the first time the kernel went to do IO. Then, when the hypervisor went to fetch the instruction bytes to emulate the IO instruction, it saw the gig-sized pages and errored. Since the incompatibility was manifest deep in the bowels of the instruction emulation code, that was the error that returned, even though it had nothing to do with instruction emulation. It would have been nice to plumb through some kind of meaningful error message, but in C that's annoying at best. In Rust, it's trivial. https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/ 70% of CVEs out of Microsoft over the last 15 years have been memory safety issues, and while we may poo-poo MSFT, they've got some really great engineers and let's be honest: Unix and Linux aren't that much better in this department. Our best and brightest C programmers continue to turn out highly buggy programs despite 50 years of experience. But it's not perfect. The allocator interface was a pain (it's defined to panic on allocation failure; I'm cool with a NULL return), though work is ongoing in this area. 
There's no ergonomic way to initialize an object 'in-place' (https://mcyoung.xyz/2021/04/26/move-ctors/), and there's no great way to say, essentially, "this points at RAM; even though I haven't initialized it, just trust me, don't poison it" ( https://users.rust-lang.org/t/is-it-possible-to-read-uninitialized-memory-without-invoking-ub/63092 -- we really need a `freeze` operation). However, right now? I think it sits at a local maximum for systems languages targeting bare-metal.

- Dan C.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From dfawcus+lists-coff at employees.org Sun Feb 6 09:09:19 2022 From: dfawcus+lists-coff at employees.org (Derek Fawcus) Date: Sat, 5 Feb 2022 23:09:19 +0000 Subject: [COFF] Zig (was Re: more about Brian... [really Rust]) In-Reply-To: References: <202202011537.211FbYSe017204@freefriends.org> <20220201155225.5A9541FB21@orac.inputplus.co.uk> <202202020747.2127lTTh005669@freefriends.org> <7C19F93B-4F21-4BB1-A064-0307D3568DB7@cfcl.com> <1nFWmo-1Gn-00@marmaro.de> <202202040234.2142YeKN3307556@darkstar.fourwinds.com> Message-ID:

On Fri, Feb 04, 2022 at 06:18:09PM -0500, Dan Cross wrote:
> [TUHS to Bcc, +COFF ]
>
> This isn't exactly COFF material, but I don't know what list is more
> appropriate.
>
[snip]
> However, right now? I think it
> sits at a local maximum for systems languages targeting bare-metal.

Have you played with Zig?
DF

$ cat main.zig
const std = @import("std");

pub fn mulOverflow(a: u16, b: u16) u16 {
    return a * b;
}

pub fn mulWrap(a: u16, b: u16) u16 {
    return a *% b;
}

pub fn main() void {
    const result1 = mulWrap(65535, 4);
    std.debug.print("mulWrap is {d}\n", .{result1});
    const result2 = mulOverflow(65535, 4);
    std.debug.print("mulOverflow is {d}\n", .{result2});
}

$ ./main
mulWrap is 65532
thread 32589 panic: integer overflow
/home/derek/Code/zig-play/main.zig:4:14: 0x2347bd in mulOverflow (main)
    return a * b;
             ^
/home/derek/Code/zig-play/main.zig:15:32: 0x22cfda in main (main)
    const result2 = mulOverflow(65535, 4);
                               ^
/usr/local/zig-linux-x86_64-0.9.0/lib/std/start.zig:543:22: 0x225d5c in std.start.callMain (main)
            root.main();
                     ^
/usr/local/zig-linux-x86_64-0.9.0/lib/std/start.zig:495:12: 0x20713e in std.start.callMainWithArgs (main)
    return @call(.{ .modifier = .always_inline }, callMain, .{});
           ^
/usr/local/zig-linux-x86_64-0.9.0/lib/std/start.zig:409:17: 0x2061d6 in std.start.posixCallMainAndExit (main)
        std.os.exit(@call(.{ .modifier = .always_inline }, callMainWithArgs, .{ argc, argv, envp }));
                   ^
/usr/local/zig-linux-x86_64-0.9.0/lib/std/start.zig:322:5: 0x205fe2 in std.start._start (main)
    @call(.{ .modifier = .never_inline }, posixCallMainAndExit, .{});
    ^
Aborted

$ zig build-exe -O ReleaseFast main.zig
$ ./main
mulWrap is 65532
mulOverflow is 65532

$ zig build-exe -O ReleaseSafe main.zig
$ ./main
mulWrap is 65532
thread 32608 panic: integer overflow
Aborted

--

From ralph at inputplus.co.uk Sun Feb 6 22:32:06 2022 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Sun, 06 Feb 2022 12:32:06 +0000 Subject: [COFF] Zig In-Reply-To: References: <202202011537.211FbYSe017204@freefriends.org> <20220201155225.5A9541FB21@orac.inputplus.co.uk> <202202020747.2127lTTh005669@freefriends.org> <7C19F93B-4F21-4BB1-A064-0307D3568DB7@cfcl.com> <1nFWmo-1Gn-00@marmaro.de> <202202040234.2142YeKN3307556@darkstar.fourwinds.com> Message-ID: <20220206123206.12E221FE27@orac.inputplus.co.uk>

Hi
Derek,

Thanks for the Zig example. Some months ago, I browsed the site and then read

    In-depth Overview
    Here’s an in-depth feature overview of Zig from a systems-programming
    perspective.
    https://ziglang.org/learn/overview/

and was impressed given a background of C, assembler, and bare metal. The language has a clarity of design in a similar way to early Perl and Python and seems to me the best of the competitors in the Rust area, which I dislike. Nice features include C-library integration without FFI or bindings as Zig is also a C compiler.

> i.e. see the below test program and its output.
...
> $ ./main

To explain for others, the verbose stack backtrace in the first run comes from building the executable in the default ‘Debug’ build mode. There are four modes available, listed in the above overview.

                  Runtime safety         Optimisation
    Debug         Crash with backtrace
    ReleaseSafe   Crash with backtrace   -O3
    ReleaseFast   Undefined behaviour    -O3
    ReleaseSmall  Undefined behaviour    -Os

-- Cheers, Ralph.

From crossd at gmail.com Sun Feb 6 22:49:53 2022 From: crossd at gmail.com (Dan Cross) Date: Sun, 6 Feb 2022 07:49:53 -0500 Subject: [COFF] Zig (was Re: more about Brian... [really Rust]) In-Reply-To: References: <202202011537.211FbYSe017204@freefriends.org> <20220201155225.5A9541FB21@orac.inputplus.co.uk> <202202020747.2127lTTh005669@freefriends.org> <7C19F93B-4F21-4BB1-A064-0307D3568DB7@cfcl.com> <1nFWmo-1Gn-00@marmaro.de> <202202040234.2142YeKN3307556@darkstar.fourwinds.com> Message-ID:

On Sat, Feb 5, 2022 at 6:18 PM Derek Fawcus < dfawcus+lists-coff at employees.org> wrote:

> On Fri, Feb 04, 2022 at 06:18:09PM -0500, Dan Cross wrote:
> > [TUHS to Bcc, +COFF ]
> >
> > This isn't exactly COFF material, but I don't know what list is more
> > appropriate.
> >
> [snip]
>
> > However, right now? I think it
> > sits at a local maximum for systems languages targeting bare-metal.
>
> Have you played with Zig?
> I've only just started, but it does seem to
> be trying to address a number of the issues with C UB, and safety,
> while sticking closer to the 'C' space vs where I see Rust targeting
> the 'C++' space.
>
> It doesn't have Rust's ownership / borrow checker stuff, it does seem
> to have bounds checking on arrays.
>

To be fair, I haven't given zig an honest shake yet. That said, the borrow checker and ownership are a major part of what makes Rust really useful: it dramatically reduces the burden of manual memory management. True, it also means that some of the things one would like to do are annoying (mutually self-referential data structures can be rough; self-referential structures similarly, since a move is conceptually equivalent to memcpy). My cursory scan says that Zig already has a lot over C for this space, though.

> e.g. the UB for multiply example you give ends up as a run time panic
> (which I suspect can be caught), or one can use a different (wrapping)
> multiply operator similar to in Rust.
> i.e. see the below test program and its output.
>

Nice.

- Dan C.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From crossd at gmail.com Mon Feb 7 00:13:56 2022 From: crossd at gmail.com (Dan Cross) Date: Sun, 6 Feb 2022 09:13:56 -0500 Subject: [COFF] [TUHS] more about Brian... In-Reply-To: References: <20220201155225.5A9541FB21@orac.inputplus.co.uk> <202202020747.2127lTTh005669@freefriends.org> <7C19F93B-4F21-4BB1-A064-0307D3568DB7@cfcl.com> <1nFWmo-1Gn-00@marmaro.de> <1644006490.2458.78.camel@mni.thm.de> <20220206005609.GG3045@mcvoy.com> <21015c2c-2652-bbc3-dbd7-ad3c31f485a2@gmail.com> Message-ID:

Oh dear. This is getting a little heated. TUHS to Bcc:, replies to COFF.
On Sun, Feb 6, 2022 at 8:15 AM Ed Carp wrote:

> Since you made this personal and called me out specifically, I will
> respond:
>
> "In what way is automatic memory management harder, more unsafe, and
> less robust than hand-written memory management using malloc and
> free?"
>
> Because there's no difference in the two. Someone had to write the
> "automatic memory management", right?
>

I cannot agree with this: there is a big difference. With GC, you are funneling all of the fiddly bits of dealing with memory management through a runtime that is written by a very small pool of people who are intimately familiar with the language, the runtime, the compilation environment, and so on. That group of subject matter experts produces a system that is tested by every application (much like the _implementation_ of malloc/free itself, which is not usually reproduced by every programmer who _uses_ malloc/free). It's like in "pure" functional languages such as Haskell, where everything is immutable: that doesn't mean that registers don't change values, or that memory cells don't get updated, or that IO doesn't happen, or the clock doesn't tick. Rather, it means that the programmer makes a tradeoff where they cede control over those things to the compiler and a runtime written by a constrained set of contributors, in exchange for guarantees those things make about the behavior of the program. With manual malloc/free, one smears responsibility for getting it right across every program that does dynamic memory management. Some get it right; many do not.

In many ways, the difference between automatic and manual memory management is like the difference between programming in assembler and programming in a high-level language. People have written reliable, robust assembler for decades (look at the airline industry), but few people would choose to do so today; why? Because it's tedious and life is too short as it is.
Further, the probability of error is greater than in a high-level language; why tempt fate? [snip] > "This discussion should probably go to COFF, or perhaps I should just > leave the list. I am starting to feel uncomfortable here. Too much > swagger." > > I read through the thread. Just because people don't agree with each > other doesn't equate to "swagger". I've seen little evidence of > anything other than reasoned analysis and rational, respectful > discussion. Was there any sort of personal attacks that I missed? > It is very difficult, in a forum like this, to divine intent. I know for a fact that I've written things to this list that were interpreted very differently than I meant them. That said, there has definitely been an air that those who do not master manual memory management are just being lazy and that "new" programmers are unskilled. Asserting that this language or that is "ours" due to its authors while that is "theirs" or belongs solely to some corporate sponsor is a bit much. The reality is that languages and operating systems and hardware evolve over time, and a lot of the practices we took for granted 10 years ago deserve reexamination in the light of new context. There's nothing _wrong_ with that, even if it may be uncomfortable (I know it is for me). The fact of the matter is, code written with malloc/free, if written > carefully, will run for *years*. There are Linux boxes that have been > running for literally years without being rebooted, and mainframes and > miniframes that get booted only when a piece of hardware fails. > That there exist C programs that have run for many years without faults is indisputable. Empirically, people _can_ write reliable C programs, but it is often harder than it seems to do so, particularly since the language standard gives so much latitude for implementations to change semantics in surprising ways over time. 
Just in the past couple of weeks a flaw was revealed in some Linux daemon that allowed privilege escalation to root...due to improper memory management. That flaw had been in production for _12 years_. Sadly, this is not an isolated incident.

That said, does manual memory management have a place in modern computing? Of course it does, as you rightly point out. So does assembly language. Rust came up in the context of this thread as a GC'd language, and it may be worth mentioning that Rust uses manual memory management; the language just introduces some facilities that make this safer. For instance, the concept of ownership is elevated to first-class status in Rust, and there are rules about taking references to things; when something's owner goes out of scope, it is "dropped", but the compiler statically enforces that there are no outstanding references to that thing. Regardless, when dealing with some resource it is often the programmer's responsibility to make sure that a suitable drop implementation exists.

FWIW, I used to sit down the hall from a large subgroup of the Go developers; we usually ate lunch together. I know that many of them shared my opinion that Rust and Go are very complementary. No one tool is right for all tasks.

- Dan C.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From athornton at gmail.com Mon Feb 7 01:42:00 2022 From: athornton at gmail.com (Adam Thornton) Date: Sun, 6 Feb 2022 08:42:00 -0700 Subject: [COFF] Zig (was Re: more about Brian...
[really Rust]) In-Reply-To: References: <202202011537.211FbYSe017204@freefriends.org> <20220201155225.5A9541FB21@orac.inputplus.co.uk> <202202020747.2127lTTh005669@freefriends.org> <7C19F93B-4F21-4BB1-A064-0307D3568DB7@cfcl.com> <1nFWmo-1Gn-00@marmaro.de> <202202040234.2142YeKN3307556@darkstar.fourwinds.com> Message-ID: Something Larry said right before the discussion moved over here, and a conversation I was having some in a different place last week: "languages with guard rails" I'm not gonna complain about them. Don't get me wrong: I love v6 and v7 Unix, and the Lions book is great, and I have enjoyed "int? Pointer? Why should I care?" plenty of times. I've done my share of Perl golfing. But on the other hand: languages with strong typing and happening-for-me memory management, while they may be more constraining if I'm going to be writing something new, of my own design, from a blank slate... Most of my career has been spent not doing that. A lot more of it has been being handed a pile of code that was written by someone who left the company a couple years before I got there, and being told to find out why it began breaking last week, and that the company is leaking money every time it breaks. And in that circumstance, I really don't want clever in the code I'm looking at. I don't want to have to figure out that some chunk of memory is a stealthily-declared union that usually holds a pointer to a character string except sometimes it holds an int whose value is meaningful. I want to be able to look at it and see, from the structure and the type annotations, what the intent of the code was, because when that's clear, it's usually a lot easier to figure out what's subtly wrong with the implementation. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From dave at horsfall.org Wed Feb 16 07:42:43 2022 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 16 Feb 2022 08:42:43 +1100 (EST) Subject: [COFF] [TUHS] Lorinda Cherry (fwd) Message-ID:

Sad news...

-- Dave

---------- Forwarded message ----------
Date: Tue, 15 Feb 2022 16:17:04 -0500
From: John P. Linderman
To: The Eunuchs Hysterical Society
Subject: [TUHS] Lorinda Cherry

I got this from a friend today (15 February):

===========

I'm sorry to report that Lorinda passed away a few days ago. I got a call from her sister today. Apparently the dog walker hadn't seen her for a few days and called the police. The police entered the house and found her there. Her sister says they are assuming either a heart attack or a stroke.

From will.senn at gmail.com Wed Feb 23 10:48:39 2022 From: will.senn at gmail.com (Will Senn) Date: Tue, 22 Feb 2022 18:48:39 -0600 Subject: [COFF] Tail-recursion was Re: [TUHS] Lorinda Cherry In-Reply-To: <20220222103948.1B0482206F@orac.inputplus.co.uk> References: <202202160754.21G7sbUa011318@freefriends.org> <1nKFRN-4IZ-00@marmaro.de> <8735kig8vb.fsf@vuxu.org> <4E3028A1-EC08-424A-B814-CC2AEEEAEC7B@iitbombay.org> <20220222103948.1B0482206F@orac.inputplus.co.uk> Message-ID: <52dac66b-3f10-fc5f-9325-a8f4f9bdcc99@gmail.com>

My all time favorite presentation on tail-recursion:

https://www.youtube.com/watch?v=-PX0BV9hGZY

On 2/22/22 4:39 AM, Ralph Corderoy wrote:
> Hi Otto,
>
>> MacOS uses the GNU implementation which has a long standing issue with
>> deep recursion. It even cannot handle the tail recursive calls used
>> here and will run out of its stack.
> When learning dc and seeing it relied on tail calls, the first thing
> I did was check it did tail-call elimination, and it did. That was
> GNU dc.
>
> Trying just now, I see no growth in memory usage despite heavy CPU load
> shown by TIME increasing.
> > $ dc > !ps u `pidof dc` > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > ralph 11489 0.0 0.0 2332 1484 pts/1 S+ 10:33 0:00 dc > [lmx]smlmx > ^C > Interrupt! > !ps u `pidof dc` > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > ralph 11489 75.5 0.0 2332 1488 pts/1 S+ 10:33 0:46 dc > > The memory used remained at that level during the macro execution too, > watched from outside. > > Do you have more detail on what GNU dc can't handle? dc without > tail-call elimination is a bit crippled. > From sauer at technologists.com Wed Feb 23 11:45:20 2022 From: sauer at technologists.com (Charles H. Sauer (he/him)) Date: Tue, 22 Feb 2022 19:45:20 -0600 Subject: [COFF] Tail-recursion was Re: [TUHS] Lorinda Cherry In-Reply-To: <52dac66b-3f10-fc5f-9325-a8f4f9bdcc99@gmail.com> References: <202202160754.21G7sbUa011318@freefriends.org> <1nKFRN-4IZ-00@marmaro.de> <8735kig8vb.fsf@vuxu.org> <4E3028A1-EC08-424A-B814-CC2AEEEAEC7B@iitbombay.org> <20220222103948.1B0482206F@orac.inputplus.co.uk> <52dac66b-3f10-fc5f-9325-a8f4f9bdcc99@gmail.com> Message-ID: <12c08436-9cfe-6490-65b6-a50a2b60153c@technologists.com> An opportunity for Auto-Tune?? 
On 2/22/2022 6:48 PM, Will Senn wrote: > My all time favorite presentation on tail-recursion: > > https://www.youtube.com/watch?v=-PX0BV9hGZY -- voice: +1.512.784.7526 e-mail: sauer at technologists.com fax: +1.512.346.5240 Web: https://technologists.com/sauer/ Facebook/Google/Twitter: CharlesHSauer From silas8642 at hotmail.co.uk Wed Feb 23 10:53:04 2022 From: silas8642 at hotmail.co.uk (silas poulson) Date: Wed, 23 Feb 2022 00:53:04 +0000 Subject: [COFF] Tail-recursion was Re: [TUHS] Lorinda Cherry In-Reply-To: <52dac66b-3f10-fc5f-9325-a8f4f9bdcc99@gmail.com> References: <202202160754.21G7sbUa011318@freefriends.org> <1nKFRN-4IZ-00@marmaro.de> <8735kig8vb.fsf@vuxu.org> <4E3028A1-EC08-424A-B814-CC2AEEEAEC7B@iitbombay.org> <20220222103948.1B0482206F@orac.inputplus.co.uk> <52dac66b-3f10-fc5f-9325-a8f4f9bdcc99@gmail.com> Message-ID: Yes! That's such a fun presentation! For those who want the fast version, skip to the 6:00 mark. Silas > On 23 Feb 2022, at 00:48, Will Senn wrote: > > My all time favorite presentation on tail-recursion: > > https://www.youtube.com/watch?v=-PX0BV9hGZY > > > On 2/22/22 4:39 AM, Ralph Corderoy wrote: >> Hi Otto, >> >>> MacOS uses the GNU implementation which has a long standing issue with >>> deep recursion. It even cannot handle the tail recursive calls used >>> here and will run out of its stack. >> When learning dc and seeing it relied on tail calls, the first thing >> I did was check it did tail-call elimination, and it did. That was >> GNU dc. >> >> Trying just now, I see no growth in memory usage despite heavy CPU load >> shown by TIME increasing. >> >> $ dc >> !ps u `pidof dc` >> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND >> ralph 11489 0.0 0.0 2332 1484 pts/1 S+ 10:33 0:00 dc >> [lmx]smlmx >> ^C >> Interrupt!
>> !ps u `pidof dc` >> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND >> ralph 11489 75.5 0.0 2332 1488 pts/1 S+ 10:33 0:46 dc >> >> The memory used remained at that level during the macro execution too, >> watched from outside. >> >> Do you have more detail on what GNU dc can't handle? dc without >> tail-call elimination is a bit crippled. >> > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff