From imp at bsdimp.com Fri Jun 3 12:09:55 2022 From: imp at bsdimp.com (Warner Losh) Date: Thu, 2 Jun 2022 20:09:55 -0600 Subject: [TUHS] Historical application software In-Reply-To: <9F17E4E7-37F3-43B7-A090-CEAFB2F51EDF@eschatologist.net> References: <64EEED76-2EBB-4D55-ADE4-DEDFAC391322@planet.nl> <66ae3ff2-bd07-e192-a00f-f9c701d857c8@spamtrap.tnetconsulting.net> <9F17E4E7-37F3-43B7-A090-CEAFB2F51EDF@eschatologist.net> Message-ID: On Thu, Jun 2, 2022, 8:00 PM Chris Hanson wrote: > On May 28, 2022, at 5:57 PM, Warner Losh wrote: > > > > HP-UX had a weird form of COFF in the early days. IBM AIX had its own > thing that wasn't quite COFF, nor was it quite a.out. Apollo also had a > variation on COFF that wasn't quite standard. I wrote a symbol mangler for > all of these in the early 90s and each one was its own special snowflake. > > HP initially used its own object file format for 32-bit PA-RISC, whether > running HP-UX or MPE. I believe it's still the format the ROM expects for > anything bootable, at least it is for my MPE-capable A400. > > IBM's COFF for AIX on POWER and PowerPC was XCOFF, which was also used as > the initial object file format (though not executable format) for the Power > Macintosh. Apple's Preferred Executable Format was essentially a mechanical > translation away from IBM's XCOFF; the initial toolchains produced .o files > and then a "final" binary in XCOFF format, and then ran a MakePEF tool on > that to produce the PEF binary for an executable or shared library. I > believe Be, due in part to their heritage and toolchains, also used PEF for > BeOS on PowerPC. > > And then there's the "b.out" format used by i960… > There were a number of b.out formats used by PC C compilers... Warner -- Chris > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From imp at bsdimp.com Sat Jun 4 08:19:45 2022 From: imp at bsdimp.com (Warner Losh) Date: Fri, 3 Jun 2022 16:19:45 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220603202330.f4spdxyn34uiyy5v@illithid> Message-ID: On Fri, Jun 3, 2022 at 3:21 PM Tom Ivar Helbekkmo via TUHS wrote: > Clem Cole writes: > > > Some of us on this list remember the original BSDi fight, the 386BSD > > to FreeBSD, then NetBSD and OpenBSD (I was friends with both sides of > > many of these wars). > > Irrelevant to the topic, I know, but I'd just like to point out, since > you call these things "wars", that NetBSD grew out of 386bsd in a quiet, > friendly fashion, and then FreeBSD out of NetBSD just as quietly. (BSDi > growing out of 386bsd was a completely separate affair that I know very > little about, and the OpenBSD fork from NetBSD was mostly just a > personal animosity thing, Theo de Raadt having made enemies in both the > NetBSD and FreeBSD camps -- but it has left no bad blood behind it.) > My recollection was that FreeBSD grew out of the patch kits in parallel to NetBSD growing out of the patch kits, but with the CVS repos hosted on the same host before each project got its own hosting... The CVS history shows FreeBSD started with NET/2 and then added the patchkit changes to it. I know that the family tree file says otherwise, but I've not seen convincing evidence that is really how things happened (either as an outsider observing at the time, or via extant artifacts that would show such a relationship). NetBSD did ship their first release before FreeBSD, however. My recollection from the time of the collegiality of the split differs somewhat from yours, though. > In other words, no wars that I know of. > There were a number of shenanigans (like moving the license text to the end of files) at the time.
And you never got a call at 4am from Theo demanding that you stop a FreeBSD user from saying bad things about OpenBSD... So maybe not "wars," as such, but it wasn't all sweetness and light... > That being said, I sincerely wish you all the best working out a > solution that can allow the amazingly good simh project to continue! > Yes. This looks nothing at all like the early BSD days. This looks to be a proactive attempt to take the sprawling number of forks that have happened and bring some order to them so that they don't proliferate too much. As a long-time open source governance wonk in the FreeBSD project, I like what I see. Warner > -tih > -- > Most people who graduate with CS degrees don't understand the significance > of Lisp. Lisp is the most important idea in computer science. --Alan Kay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sat Jun 4 08:26:54 2022 From: imp at bsdimp.com (Warner Losh) Date: Fri, 3 Jun 2022 16:26:54 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: <20220603213215.GO10240@mcvoy.com> References: <20220603202330.f4spdxyn34uiyy5v@illithid> <20220603213215.GO10240@mcvoy.com> Message-ID: On Fri, Jun 3, 2022 at 3:32 PM Larry McVoy wrote: > On Fri, Jun 03, 2022 at 11:20:58PM +0200, Tom Ivar Helbekkmo via TUHS > wrote: > > Clem Cole writes: > > > > > Some of us on this list remember the original BSDi fight, the 386BSD > > > to FreeBSD, then NetBSD and OpenBSD (I was friends with both sides of > > > many of these wars). > > > > Irrelevant to the topic, I know, but I'd just like to point out, since > > you call these things "wars", that NetBSD grew out of 386bsd in a quiet, > > friendly fashion, and then FreeBSD out of NetBSD just as quietly.
(BSDi > > growing out of 386bsd was a completely separate affair that I know very > > little about, and the OpenBSD fork from NetBSD was mostly just a > > personal animosity thing, Theo de Raadt having made enemies in both the > > NetBSD and FreeBSD camps -- but it has left no bad blood behind it.) > > > > In other words, no wars that I know of. > > Umm, were you there? I was a BSD guy before I turned to Linux and I > turned to Linux because of those wars. There is no good reason to have > {386,Free,Net,Open,DragonFly}bsd other than, as Linus stated, "Nobody > could decide who would drive the big red fire truck so now they each > have their own toy fire truck that they drive around". > > BSD would have won if there was a Linus for BSD. There was not so you > got all this replicated effort, the BSD community effectively divided > and conquered themselves. > > It was, and is, a train wreck. It's the poster child for how not to > manage a project. > > I did BitKeeper for Linus because he refused to use any crappy source > management solution and people like Dave Miller were threatening to > fork just so they had some solution. I did that because a forked Linux > would turn into the same mess of {386,Free,Net,Open,DragonFly}bsd which > is obviously not remotely close to ideal. Far from it. > 386BSD died because its founder couldn't deal with collaboration. He tried to be a dictator, and that failed because he didn't accept other people's contributions out of worry that he couldn't sell 386BSD. NetBSD and FreeBSD took up the charge for a free and open system. I'll agree it was unfortunate that there was a split, since NetBSD focused on portability and FreeBSD focused on the fastest possible i386/i486 code. I'd suggest, though, that the USL lawsuit cast a huge pall on things and introduced enough uncertainty to further derail things. Had it not been for that additional blow, things would have turned out differently.
OpenBSD and Dragonfly BSD didn't split until years later and also represented differences of opinion on where to take the focus of the system (OpenBSD thought the NetBSD folks didn't take security seriously enough and the DFBSD folks thought the efforts to make a parallel kernel in FreeBSD were off track and should be done completely differently). > I lived through all of that, I was an active kernel developer at Sun, > SGI and elsewhere. I would have loved to have seen the SunOS VM system > ported to 4.x BSD and that been the default answer for a kernel. Instead > we got Linux, which has it's positive points for sure, but it also has > decided to let every feature imaginable into the kernel. > We wound up with MACH in BSD because when Sun tried to donate their VM code to Berkeley, the corporate lawyers said no. It was giving away too much shareholder value, and would result in a huge write-off which would, one would presume, negatively affect the stock price. Had this donation actually transpired, 386BSD would have had a bigger advantage from the get go... Oh well Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sat Jun 4 08:33:00 2022 From: imp at bsdimp.com (Warner Losh) Date: Fri, 3 Jun 2022 16:33:00 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220603202330.f4spdxyn34uiyy5v@illithid> <20220603213215.GO10240@mcvoy.com> <20220603214032.GQ10240@mcvoy.com> Message-ID: On Fri, Jun 3, 2022 at 4:17 PM Tom Ivar Helbekkmo via TUHS wrote: > Larry McVoy writes: > > >> I do not agree. Linux won because BSD was embroiled in litigation. > > > > Like I said, we experienced that differently. In my opinion, people lean > > on the litigation excuse when they don't want to admit that *BSD was not > > a good way to do operating system development. > > What were the differences? 
The BSD projects were: > > - 386bsd: run by Jolitz, with no input from anyone else > - NetBSD: forked from 386bsd, run by Chris Demetriou as a > cooperative effort between a host of individuals (me included) > - FreeBSD: forked from NetBSD almost immediately, by a group of > contributors who felt that performance and device support on the Intel > platform was more important than maintaining hardware portability > The FreeBSD 1.x CVS tree shows that it started from NET/2 with the patchkit added on. It didn't start from the NetBSD tree, as far as I've been able to find (and I've studied the early CVS history for the git migration extensively). And oral history from many of the founders who were also patchkit contributors also matches this recounting... Though I guess a lot turns on whether you consider the patchkit early NetBSD or not... I do agree with the rest of this, though. > - OpenBSD: forked from NetBSD after de Raadt established a kind of > record by being kicked off both the NetBSD and FreeBSD mailing lists. > OpenBSD forked from NetBSD after Theo had a personality dispute with the NetBSD folks. It had little to do with the FreeBSD lists, judging from his email at the time and my early interactions with that project. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sat Jun 4 08:52:52 2022 From: imp at bsdimp.com (Warner Losh) Date: Fri, 3 Jun 2022 16:52:52 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: <20220603223014.GS10240@mcvoy.com> References: <20220603202330.f4spdxyn34uiyy5v@illithid> <20220603213215.GO10240@mcvoy.com> <20220603214032.GQ10240@mcvoy.com> <20220603223014.GS10240@mcvoy.com> Message-ID: On Fri, Jun 3, 2022 at 4:30 PM Larry McVoy wrote: > On Sat, Jun 04, 2022 at 12:16:49AM +0200, Tom Ivar Helbekkmo wrote: > > Larry McVoy writes: > > > > >> I do not agree. Linux won because BSD was embroiled in litigation.
> > > > > > Like I said, we experienced that differently. In my opinion, people > lean > > > on the litigation excuse when they don't want to admit that *BSD was > not > > > a good way to do operating system development. > > > > What were the differences? The BSD projects were: > > > > - 386bsd: run by Jolitz, with no input from anyone else > > - NetBSD: forked from 386bsd, run by Chris Demetriou as a > > cooperative effort between a host of individuals (me included) > > - FreeBSD: forked from NetBSD almost immediately, by a group of > > contributors who felt that performance and device support on the Intel > > platform was more important than maintaining hardware portability > > - OpenBSD: forked from NetBSD after de Raadt established a kind of > > record by being kicked off both the NetBSD and FreeBSD mailing lists. > > > > I'm open to contradicting arguments, but I do feel that the BSD platform > > was a much better starting point back then, and ought to have won - but > > Linux, while inferior, was available and non-threatening. > > Dude, I was there. Jolitz used to work for me at Sun, Theo's Sun 4/470 > was given to him by me, I know most of the players. > > I agree BSD was a better starting point if there was one BSD. > > The problem is there was {386,Net,Free,Open,DragonFly}BSD where there > should have just been "BSD". One, not a bunch. > Except that from 1993-1996 there were only two of those BSDs. NetBSD and FreeBSD forked in 1993 due to the inability of the patchkit to adequately cover the problems in 386BSD governance. OpenBSD didn't fork until late 1995 or early 1996, depending on when you count such things (Theo's fiery email, or the first release). DragonFly BSD didn't fork until a decade later, in 2004, due to a dispute over how to make FreeBSD's kernel SMP. And 386BSD stopped being a thing in 1993 when Jolitz disappeared from public view; NetBSD/FreeBSD filled the vacuum that created on the free side, and BSDi with BSD/386 filled the commercial space.
> Where do you think Linux would be if there was {A,B,C,D,E,F,G}Linux? > There is one kernel. One and only one. With everyone working on that > one kernel. > Except there never really was only one kernel. There have been hundreds of forks of the Linux kernel over the years. Most of them have been commercial of some flavor (Red Hat, Debian, OpenSUSE, MontaVista, WindRiver, Android, etc.) and had hundreds or thousands of patches on the base Linux kernel for a long time, and trying to move from one to another if you also had patches was a nightmare. Kernel.org has kept going, and many of the changes from these systems were lost. Some were not as good as what came in upstream, while others were encumbered by commercial contracts that made them unappealing to upstream. True, many of them did wind up in kernel.org, but to say there aren't forks in Linux is stretching reality a bit... > If you can't see the difference, I don't know what to tell you. Are you > seriously going to take the position that BSD is better off because > it has all these variants and replicated effort? Because if you are, > this conversation is over, at least from my point of view. > I think Linux's greatest strengths were the different distributions, though at times they caused a great deal of duplicated effort. They allowed different communities the room to customize things in an easy way. I believe that, more than one kernel, has been a driver of innovation. But honestly, the litigation was a deal killer for many BSD users in the early days, and that gave Linux room to grow. Had the BSDs not faced the competition from Linux and had similar resources poured into them, the NetBSD/FreeBSD split would have been good competition, much as there's good competition between Debian, Red Hat, SUSE, Canonical, etc. today in the Linux space, which helps to drive innovation. Even today, with the benefit of hindsight, it's hard to pin down which of these facts on the ground was the biggest driver for most people...
Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sat Jun 4 08:56:51 2022 From: imp at bsdimp.com (Warner Losh) Date: Fri, 3 Jun 2022 16:56:51 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220603202330.f4spdxyn34uiyy5v@illithid> <20220603213215.GO10240@mcvoy.com> <20220603214032.GQ10240@mcvoy.com> Message-ID: On Fri, Jun 3, 2022 at 4:40 PM Tom Ivar Helbekkmo wrote: > Warner Losh writes: > > > The FreeBSD 1.x CVS tree shows that it started from NET/2 with the > > patchkit added on. It didn't start from the NetBSD tree that I've been > > able to find (and I've studied the early CVS history for the git > > migration extensively). > > Yeah, I guess it might be better to say that after Chris took the > initiative to create a fork, which he named NetBSD, of Jolitz's 386bsd, > it was decided that there would be two forks; NetBSD and FreeBSD, with > slightly differing objectives. > Yea, there were a number of folks that contributed to the patchkit as well. Chris did a good thing to try to bring order to that chaos, there is no doubt, but Nate Williams, Paul Richards and others were big contributors to the patchkit and the lines of what happened where were somewhat blurry at the time as people discovered they were working at cross purposes and/or had issues working with some people.... Thankfully, for the most part the high levels of animosity that developed in the early 90s between the BSD projects have largely faded away as new groups of developers have joined the projects... Warner -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From imp at bsdimp.com Sat Jun 4 10:10:46 2022 From: imp at bsdimp.com (Warner Losh) Date: Fri, 3 Jun 2022 18:10:46 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: <20220603234822.GV10240@mcvoy.com> References: <20220603202330.f4spdxyn34uiyy5v@illithid> <20220603213215.GO10240@mcvoy.com> <20220603214032.GQ10240@mcvoy.com> <20220603223014.GS10240@mcvoy.com> <20220603234822.GV10240@mcvoy.com> Message-ID: On Fri, Jun 3, 2022, 5:48 PM Larry McVoy wrote: > On Fri, Jun 03, 2022 at 04:52:52PM -0600, Warner Losh wrote: > > > The problem is there was {386,Net,Free,Open,DragonFly}BSD where there > > > should have just been "BSD". One, not a bunch. > > > > > > > Except from 1993-1996 there were only two of those BSDs. NetBSD and > FreeBSD > > forked in 1993 due to the inability of the patchkit to adequately cover > the > > problems > > in 386BSD governance. > > Um, so there were 3: 386, Net and Free. That's already 2 too many. > No. 386BSD died before then. > > Where do you think Linux would be if there was {A,B,C,D,E,F,G}Linux? > > > There is one kernel. One and only one. With everyone working on that > > > one kernel. > > > > Except there never really was only one kernel. There have been hundreds > > of forks of the Linux kernel over the years. Most of them have been > > commercial > > of some flavor (Redhat, Debian, OpenSUSE, MontaVista, WindRiver, Android > > etc) > > had hundreds or thousands of patches on the base Linux kernel for a long > > time > > and trying to move from one to another if you also had patches was a > > nightmare. > > So I had a successful commercial product that ran on all of those variants > without issue. I supported Linux on everything from ARM to IBM's z-system > mainframes and all the arches in between. I think I have one #ifdef SPARC > in there because there was a cache flush bug, but that was a hardware issue, > not a software issue.
> > I also supported {Free,Net,Open}BSD and I had way more problems with them > than I did with Linux. > > > Kernel.org has kept going, and many of the changes from these systems > were > > lost. > > Some were not as good as what came in upstream, while others were > encumbered > > by commercial contracts that made them unappealing to upstream. True, > many > > of > > them did wind up in kernel.org, but to say there aren't forks in Linux > is > > stretching > > reality a bit... > > There is one kernel development stream that matters. RedHat knows that > if they don't get their stuff into Linus' tree, they have a nightmare > on their hands. That's why RedHat paid so many of the kernel developers. > > Sure, there are forks, but there is one tree that matters, and that is > Linus' tree. You can't say that about BSD and that is the problem in > its entirety. If I want to change BSD, which one? > By your standards, only FreeBSD matters... so that's easy... but you already said Redhat is all that matters... and that kernel differs somewhat from Linus'. Ditto if you are dealing with Android... it's not just one Linux and never has been. Warner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Mon Jun 6 11:02:33 2022 From: imp at bsdimp.com (Warner Losh) Date: Sun, 5 Jun 2022 19:02:33 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220603213215.GO10240@mcvoy.com> <20220603214032.GQ10240@mcvoy.com> <20220603223014.GS10240@mcvoy.com> <20220603234822.GV10240@mcvoy.com> <20220604010543.GZ10240@mcvoy.com> Message-ID: On Sun, Jun 5, 2022 at 6:32 PM Theodore Ts'o wrote: > On Fri, Jun 03, 2022 at 06:05:43PM -0700, Larry McVoy wrote: > > > > So in all of this, the thing that keeps getting missed is Linux won.
And it > > won because there was one Linux kernel. You could, and I have many, > > many times, just clone the latest kernel and compile and install on > > any distribution. It worked. The same was never true for all the BSD > > variants. Tell me about the times you cloned the OpenBSD kernel and > > it worked on FreeBSD. I'm maybe sure there was maybe one point in time > > where that worked but then for all the other points in time it didn't. > > So in essence what you're saying is that OpenBSD and FreeBSD weren't > ABI compatible, and that's your definition of when two OS's are > different. And so when Warner says that there are hundreds of forks > of the Linux kernel, to the extent that the ABI's are compatible, they > really aren't "different". > Yes. The forks might not have been bad, but there's always some logical fallacy presented to say they are somehow the "same" because of some artificial criteria. And I'm not convinced all the forks are bad, per se. Just that the narrative that says there's only one is false and misleading in some ways, because there always was a diversity of 'add in' patches for different distributions, both commercial and hobbyist... Much, but by no means all, of that wound up upstream, though not always the best version; the reasons for rejection could be arbitrary at times, or the process simply too much trouble to bother with at others. There was something in the diversity, though, that I'll readily admit was beneficial. > Part of this comes from the fact that the Linux kernel, C library, > and core utilities are all shipped separately. The BSDs have often > criticized this, claiming that shipping all of the OS in a single > source control system makes it easier to roll out new features. There > are no doubt upsides to having a single source tree; but one of the > advantages of keeping things separate is that the definition of the kernel > <-> userspace interface is much more explicit.
> > That being said, I will note that this hasn't always been true. There > was a brief period where an early Red Hat Enterprise Linux version > suffered from the "legacy Unix value-add disease", where Red Hat had > added some kernel changes that impacted kernel interfaces, which > didn't make it upstream, or made it upstream with a changed interface, > such that when users wanted to use a newer upstream kernel, which had > newer features, and newer device driver support, it wouldn't work with > that version of RHEL. Red Hat was criticized *heavily* for that, both by > the upstream development community and by its users, and since then it > has stuck to an "upstream first" policy, especially where new system > calls, or some other kernel interface is concerned. > I suffered through MontaVista Linux which definitely wasn't ABI compatible. And all of their board support packages were based on different versions of Linux, making it a nightmare to support lots of architectures... > One of the reasons why that early RHEL experience kept Red Hat in line > was because none of the other Linux distributions had that property > --- and because the core development in upstream hadn't slacked off, > so there was a strong desire to upgrade to newer kernels on RHEL, and > when that didn't work, not only did that make customers and > developers upset, but it also made life difficult for Red Hat > engineers, since they now needed to figure out how to forward-port their > "value add" changes onto the latest and greatest kernel release. > > > An interesting question is if CSRG had been actively pushing the state > of the art forward, would that have provided sufficient centripetal > force to keep the HP/UX, SunOS, DG/UX, etc., from splintering? After > all, it's natural to want to get a competitive advantage over your > competition by adding new features --- this is what I call the "Legacy > Unix value-add disease".
But if you can't keep up with the upstream > developments, that provides a strong disincentive against making > permanent forks. For that matter, why was it that successive new > releases of AT&T System V weren't able to play a similar role? Was it > because the rate of change was too slow? Was it because applications > weren't compatible anyway due to ISA differences? I don't know.... > CSRG's funding dried up when the DARPA work was over. And even before it was over, CSRG was more an academic group than one with a desire to impose its will on commercial groups that it had no leverage over. And AT&T had become all about monetization of Unix, which meant it imposed new terms that were unfavorable, making it harder for old-time licensees to justify pulling in the new code that would have kept the world from Balkanizing as badly as it did. So there were complex issues at play here as well. > One other dynamic might be the whole worse is better is worse debate. > As an example of this, Linux had PCMCIA support at least a year or two > before NetBSD did, and in particular Linux had hot-add support where > you could insert an ethernet PCMCIA card into your laptop after the OS had > booted, and the ethernet card would work. However, if you ejected the > ethernet card, there was a roughly 1 in 4 chance that your system > would crash. NetBSD took a lot longer to get PCMCIA support --- but > when it did, it had hot-add and hot-remove working perfectly, while > Linux took a year or two more after that point before hot-remove was > solidly reliable. > Except FreeBSD's PAO project had PCMCIA support about two years before NetBSD did, and hot plug worked on it too... So that's a bit of an apples to oranges comparison. To be fair, the main FreeBSD project was slow to take up changes from PAO and that set back PC Card and CardBus support a number of years.
> So from a computer science point of view, one could argue that NetBSD > was "better", and that Linux had a whole bunch of hacks, and some > might even argue was written by a bunch of hacks. :-) However, from > the user's perspective, who Just Wanted Their Laptop To Work, the fact > that Linux had some kind of rough PCMCIA support first mattered a lot > more than a "we will ship no code before its time" attitude. And > some of those users would become developers, which would cause a > positive feedback loop. > At the time, though, FreeBSD ran the busiest FTP server on the internet and could handle quite a bit more load than an equivalent Linux box. And NetBSD was much more in the "no code before its time" camp than FreeBSD, which tried to get things out faster and often did a good job at that. Though it did well with networking, it didn't do so well with PC Card, so it's rather a mixed bag. The only reason I keep replying to this thread is that the simple narratives that people keep repeating oftentimes aren't so simple, and the factors going into things tend to be much more complex and nuanced. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Mon Jun 6 12:15:44 2022 From: imp at bsdimp.com (Warner Losh) Date: Sun, 5 Jun 2022 20:15:44 -0600 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: On Sun, Jun 5, 2022, 7:40 PM Warren Toomey via TUHS wrote: > Hi all, we have a new addition to the Unix Archive at: > https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ > > This is the documentation for Unix 4.0 which preceded System V. The > documents were provided by Arnold Robbins and scanned in by Matt Gilmore. > Shiny. New things to read... never thought we'd see this... Must resist the urge to ask about boot tapes. :) Warner > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From davida at pobox.com Tue Jun 7 09:28:14 2022 From: davida at pobox.com (David Arnold) Date: Tue, 7 Jun 2022 09:28:14 +1000 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220603213215.GO10240@mcvoy.com> <20220603214032.GQ10240@mcvoy.com> <20220603223014.GS10240@mcvoy.com> <20220603234822.GV10240@mcvoy.com> <20220604010543.GZ10240@mcvoy.com> <20220606011511.GG10240@mcvoy.com> Message-ID: > On 7 Jun 2022, at 00:53, Henry Bent wrote: > > On Sun, 5 Jun 2022 at 21:41, Dan Cross > wrote: > > The other day, I needed a Linux machine for work. I grabbed another > NUC and put Arch on it. A vastly different experience: much more akin > to installing 7th Edition than Windows or macOS. Oh! And I missed a > step, so I had to pull some shenanigans to fix that. > > Gentoo is even more arcane, but that's essentially an "I want to do everything myself" distribution. I suppose my point is that there exists a full range of distributions, from the truly masochistic Linux From Scratch to the most hands-off/static ChromeOS Flex. I don't believe that any other "OS" has such a wide range of offerings. This is obviously both a wonderful feature and a confusing nightmare, depending on your audience. Lest it be thought that all is sweetness and light in Linux-land, there were years of fairly intense competition involved in getting installers to the point that you can start with a downloaded image, burn it to a USB, boot it, run it, and (optionally) make it persist over a reboot, all with very minimal need to understand or care about the many, many things going on under the hood. More recently, installation has become more-or-less settled technology (and so things like Arch have arisen that specialise away from that experience), and there’s increasing competition around the end-user experience. Distributions like ChromeOS or https://elementary.io/ or (from the BSD world!)
https://hellosystem.github.io/, are attempting to provide a more seamless user experience than the standard GNOME-or-KDE duopoly that has until recently focused on being competitive with decades old Windows/macOS. Perhaps that’s something OpenSIMH could take from this history: a focus on painless installation and a decent UI! d -------------- next part -------------- An HTML attachment was scrubbed... URL: From tytso at mit.edu Wed Jun 8 00:30:14 2022 From: tytso at mit.edu (Theodore Ts'o) Date: Tue, 7 Jun 2022 10:30:14 -0400 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: Message-ID: On Tue, Jun 07, 2022 at 09:28:14AM +1000, David Arnold wrote: > Lest it be thought that all is sweetness and light in Linux-land, > there were years of fairly intense competition involved in getting > installers to the point that you can start with a downloaded image, > burn it to a USB, boot it, run it, and (optionally) make it persist > over a reboot, all with very minimal need to understand or care > about the many, many things going on under the hood. On Sun, Jun 05, 2022 at 09:40:44PM -0400, Dan Cross wrote: > > But every distribution has its own installer, and they vary wildly. The key is I think *competition*. Distributions were competing to attract a user base, and one of the ways they could do that was by improving the install experience. There were people who reviewed distributions based on which one had the better installer, and that helped users who were Windows refugees choose the ones that had the better installer. The other advantages of having a many distributions is that gave more people to opportunity to exercise leadership --- you can "drive the big red firetruck" by founding a distro like Debian or Slackware, and the people who are interested in improving a distribution can be different from those who drive kernel development. 
This is one of the things that I've learned from my rector at my church, who had a background in community organizing. One of the big differences between community organizing and the corporate world is that it's more important to give more people --- volunteers --- opportunities to contribute, and very often this is far more important than efficiently organizing a corporate-style team to get some job done. Was it inefficient to have multiple teams competing on installer development, and release engineering? Sure, but it also drew more people into the Linux ecosystem. > The ABI compatibility thing breaks down, too. A colleague was trying > to get the host-side of a Saleae logic analyzer working on Arch, and it > doesn't. They more or less require Ubuntu 18.something, and that's > not what he runs. As far as most end-users are concerned, your > distribution of choice is "Linux", and distributions vary in all kinds of > ways. There are three different things that are worth separating. One is a consistent kernel<->user space interface; this is what Linus Torvalds considers high priority when he says, "Thou shalt not break userspace". This is what allows pretty much all distributions to replace the kernel that was shipped with the distribution with the latest upstream kernel. And this is something that in general doesn't work with *BSD systems. The second is application source-level compatibility, and this is what allows you to download some open source application, and recompile it on different Linux distributions, and it should Just Work. In practice this works for most Linux and *BSD users. And the third is application *binary* level compatibility. And this is what is important if you have some binary that you've downloaded, or some commercial application which you've purchased, and you want to run it on a Linux distribution different from the one for which it was originally designed. 
Static linking solves most of the problems; even users who need proprietary/commercial binaries will generally not have issues if they stick to RHEL, Fedora, or Ubuntu/Debian. - Ted From crossd at gmail.com Wed Jun 8 01:08:34 2022 From: crossd at gmail.com (Dan Cross) Date: Tue, 7 Jun 2022 11:08:34 -0400 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: Message-ID: On Tue, Jun 7, 2022 at 10:30 AM Theodore Ts'o wrote: > On Tue, Jun 07, 2022 at 09:28:14AM +1000, David Arnold wrote: > > Lest it be thought that all is sweetness and light in Linux-land, > > there were years of fairly intense competition involved in getting > > installers to the point that you can start with a downloaded image, > > burn it to a USB, boot it, run it, and (optionally) make it persist > > over a reboot, all with very minimal need to understand or care > > about the many, many things going on under the hood. > > On Sun, Jun 05, 2022 at 09:40:44PM -0400, Dan Cross wrote: > > > > But every distribution has its own installer, and they vary wildly. > > The key is I think *competition*. Distributions were competing to > attract a user base, and one of the ways they could do that was by > improving the install experience. There were people who reviewed > distributions based on which one had the better installer, and that > helped users who were Windows refugees choose the ones that had the > better installer. My point is that this is something that varies from distro to distro; it is therefore inaccurate to claim that "Linux solved it" since many different distros that have widely varying installation processes fall under the very large "Linux" umbrella. 
> The other advantages of having a many distributions is that gave more > people to opportunity to exercise leadership --- you can "drive the > big red firetruck" by founding a distro like Debian or Slackware, and > the people who are interested in improving a distribution can be > different from those who drive kernel development. This is one of the > things that I've learned from my rector at my church, who had a > background in community organizing. One of the big differences > between community organizing compared to the corporate world is that > it's more important to give more people --- volunteers --- > opportunities to contribute, and very often this is far more important > than efficiently organizing a corporate-style team to get some job > done. Was it inefficient to have multiple teams competing on > installer development, and release engineering? Sure, but it also > drew more people into the Linux ecosystem. That's an interesting angle and one that I think bears more on the topic at hand than many folks are willing to let on: the barrier to contribution is, in a lot of important ways, lower in the Linux ecosystem than it is in the BSD world. At least historically speaking, and perhaps still true. Anecdotally, I was able to get a patch into the KVM unit tests (not precisely Linux but related) in pretty short order recently while the OpenBSD people simply ignored my problem report and patch. YMMV. > > The ABI compatibility thing breaks down, too. A colleague was trying > > to get the host-side of a Salae logic analyzer working on Arch, and it > > doesn't. They more or less require Ubuntu 18.something, and that's > > not what he runs. As far as most end-users are concerned, your > > distribution of choice is "Linux", and distributions vary in all kinds of > > ways. > > There are three different things that's worth separating. 
One is a > consistent kernel<->user space interface, this is what Linus Torvalds > considers high priority when he says, "Thou shalt not break > userspace". This is what allows pretty much all distributions to > replace the kernel that was shipped with the distribution with the > latest upstream kernel. And this is something that in general doesn't > work with *BSD systems. Eh? I feel like I can upgrade the kernel on the various BSDs without binaries breaking pretty easily. Then again, there _have_ been times when there were flag days that required rebuilding the world; but surely externalities are more common here (e.g., switching from one ISA to another). > The second is application source-level compatibility, and this is what > allows you to download some open source application, and recompile it > on different Linux distributions, and it should Just Work. In > practice this works for most Linux and *BSD users. This, I think, is where things break down. Simply put, the way people build applications has changed, and "source-level" compatibility means compatibility with a bunch of third-party libraries; in many ways the kernel interfaces matter much, much less (many of which are defined by externally imposed standards anyway). If a distro ships a too-old or too-new version of the dependency, then the open source thing will often not build, and for most end users, this is a distinction without a difference. > And the third is application *binary* level compatibility. And this > is what is important if you have some binary that you've downloaded, > or some commerical application which you've purchased, and you want to > run it on Linux distribution different from the one which is > originally designed. Static linking solves most of the problems, but > if the user needs to use proprietary/commercial binaries, if they > stick to RHEL, Fedora, Ubuntu/Debian, they will generally not have > issues. Yup. 
But then that you're running Linux is mostly immaterial; it could be Windows and the same would be true. - Dan C. From lm at mcvoy.com Wed Jun 8 01:25:19 2022 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 7 Jun 2022 08:25:19 -0700 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: Message-ID: <20220607152519.GN15041@mcvoy.com> On Tue, Jun 07, 2022 at 11:08:34AM -0400, Dan Cross wrote: > On Tue, Jun 7, 2022 at 10:30 AM Theodore Ts'o wrote: > > The key is I think *competition*. Distributions were competing to > > attract a user base, and one of the ways they could do that was by > > improving the install experience. There were people who reviewed > > distributions based on which one had the better installer, and that > > helped users who were Windows refugees choose the ones that had the > > better installer. > > My point is that this is something that varies from distro to distro; > it is therefore inaccurate to claim that "Linux solved it" since many > different distros that have widely varying installation processes > fall under the very large "Linux" umbrella. Yeah, there are a large number of distros but I'm willing to bet that Debian, RedHat and Ubuntu variants account for the vast majority of installs. > > There are three different things that's worth separating. One is a > > consistent kernel<->user space interface, this is what Linus Torvalds > > considers high priority when he says, "Thou shalt not break > > userspace". This is what allows pretty much all distributions to > > replace the kernel that was shipped with the distribution with the > > latest upstream kernel. And this is something that in general doesn't > > work with *BSD systems. > > Eh? I feel like I can upgrade the kernel on the various BSDs > without binaries breaking pretty easily. 
Then again, there _have_ > been times when there were flag days that required rebuilding > the world; but surely externalities are more common here (e.g., > switching from one ISA to another). Try installing an OpenBSD kernel on FreeBSD; that's what we mean by compat. I'm more than willing to believe that you can pull head on the FreeBSD source tree and build & install it on FreeBSD. Much less willing to believe that that works across Open/Free or Net/Free. With Linux, on pretty much any distro, you can pull Linus' tree and build and install it without drama. If you are running some ancient release you might have to update your toolchain but that's about it. Linus is super careful to not break the syscall table. It's extend-only, which makes it a mess, but a binary-compat mess. > > The second is application source-level compatibility, and this is what > > allows you to download some open source application, and recompile it > > on different Linux distributions, and it should Just Work. In > > practice this works for most Linux and *BSD users. > > This, I think, is where things break down. Simply put, the way > people build applications has changed, and "source-level" > compatibility means compatibility with a bunch of third-party > libraries; in many ways the kernel interfaces matter much, much > less (many of which are defined by externally imposed standards > anyway). If a distro ships a too-old or too-new version of the > dependency, then the open source thing will often not build, and > for most end users, this is a distinction without a difference. Yes, you are correct, I've experienced that as well with sort of newer complex apps. 
From rich.salz at gmail.com Wed Jun 8 01:55:22 2022 From: rich.salz at gmail.com (Richard Salz) Date: Tue, 7 Jun 2022 11:55:22 -0400 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: Message-ID: > > Lest it be thought that all is sweetness and light in Linux-land, I don't think anyone has thought or said that about Linux ever. -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Wed Jun 8 02:03:41 2022 From: will.senn at gmail.com (Will Senn) Date: Tue, 7 Jun 2022 11:03:41 -0500 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: <20220607152519.GN15041@mcvoy.com> References: <20220607152519.GN15041@mcvoy.com> Message-ID: Interesting crossover from Linux to Linux Distros... Debian's my personal fave (in the form of Mint, MX, or even Ubuntu), mostly cuz apt seems to just work (for me, ymmv) and rpm sucks :). However, all of these run on the same kernel and generally provide the same userland (core-utils, etc). I constantly switch from Mac (Mojave) to FreeBSD to Linux (Mint, usually, but occasionally opensuse) and other than minor differences in switches, they are all reminiscent of v7, from a user perspective, outside of GUIs. Except for ZFS, which will keep FreeBSD in my environment until it's added to the Linux Kernel - is hell freezing over yet? -w On 6/7/22 10:25 AM, Larry McVoy wrote: > On Tue, Jun 07, 2022 at 11:08:34AM -0400, Dan Cross wrote: >> On Tue, Jun 7, 2022 at 10:30 AM Theodore Ts'o wrote: >>> The key is I think *competition*. Distributions were competing to >>> attract a user base, and one of the ways they could do that was by >>> improving the install experience. There were people who reviewed >>> distributions based on which one had the better installer, and that >>> helped users who were Windows refugees choose the ones that had the >>> better installer. 
>> My point is that this is something that varies from distro to distro; >> it is therefore inaccurate to claim that "Linux solved it" since many >> different distros that have widely varying installation processes >> fall under the very large "Linux" umbrella. > Yeah, there are a large number of distros but I'm willing to bet that > Debian, RedHat and Ubuntu variants account for the vast majority of > installs. > >>> There are three different things that's worth separating. One is a >>> consistent kernel<->user space interface, this is what Linus Torvalds >>> considers high priority when he says, "Thou shalt not break >>> userspace". This is what allows pretty much all distributions to >>> replace the kernel that was shipped with the distribution with the >>> latest upstream kernel. And this is something that in general doesn't >>> work with *BSD systems. >> Eh? I feel like I can upgrade the kernel on the various BSDs >> without binaries breaking pretty easily. Then again, there _have_ >> been times when there were flag days that required rebuilding >> the world; but surely externalities are more common here (e.g., >> switching from one ISA to another). > Try installing an OpenBSD kernel on FreeBSD, that's what we mean by > compat. I'm more than willing to believe that you can pull head on > the FreeBSD source tree and build & install it on FreeBSD. Much less > willing to believe that that works Open/Free or Net/Free. > > With Linux, on pretty much any distro, you can pull Linus' tree and > build and install it without drama. If you are running some ancient > release you might have to update your toolchain but that's about it. > Linus is super careful to not break the syscall table. It's extend > only, which makes it a mess, but a binary compat mess. > >>> The second is application source-level compatibility, and this is what >>> allows you to download some open source application, and recompile it >>> on different Linux distributions, and it should Just Work. 
In >>> practice this works for most Linux and *BSD users. >> This, I think, is where things break down. Simply put, the way >> people build applications has changed, and "source-level" >> compatibility means compatibility with a bunch of third-party >> libraries; in many ways the kernel interfaces matter much, much >> less (many of which are defined by externally imposed standards >> anyway). If a distro ships a too-old or too-new version of the >> dependency, then the open source thing will often not build, and >> for most end users, this is a distinction without a difference. > Yes, you are correct, I've experienced that as well with sort of > newer complex apps. From imp at bsdimp.com Wed Jun 8 02:38:57 2022 From: imp at bsdimp.com (Warner Losh) Date: Tue, 7 Jun 2022 10:38:57 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: On Tue, Jun 7, 2022, 9:03 AM Will Senn wrote: > Interesting crossover from Linux to Linux Distros... Debian's my > personal fave (in the form of Mint, MX, or even Ubuntu), mostly cuz apt > seems to just work (for me, ymmv) and rpm sucks :). However, all of > these run on the same kernel and generally provide the same userland > (core-utils, etc). But not the same binaries. I've run into a lot of issues trying to run a binary for Debian on red hat or vice Vera due to shared libraries not being completely compatible... kinda makes the whole system call argument moot since there is always significant version skew... Warner I constantly switch from Mac (Mojave) to FreeBSD to > Linux (Mint, usually, but occassionally opensuse) and other than minor > differences in switches, they are all reminiscent of v7, from a user > perspective, outside of GUIs. Except for ZFS, which will keep FreeBSD in > my environment until it's added to the Linux Kernel - is hell freezing > over yet? 
> > -w > > > > On 6/7/22 10:25 AM, Larry McVoy wrote: > > On Tue, Jun 07, 2022 at 11:08:34AM -0400, Dan Cross wrote: > >> On Tue, Jun 7, 2022 at 10:30 AM Theodore Ts'o wrote: > >>> The key is I think *competition*. Distributions were competing to > >>> attract a user base, and one of the ways they could do that was by > >>> improving the install experience. There were people who reviewed > >>> distributions based on which one had the better installer, and that > >>> helped users who were Windows refugees choose the ones that had the > >>> better installer. > >> My point is that this is something that varies from distro to distro; > >> it is therefore inaccurate to claim that "Linux solved it" since many > >> different distros that have widely varying installation processes > >> fall under the very large "Linux" umbrella. > > Yeah, there are a large number of distros but I'm willing to bet that > > Debian, RedHat and Ubuntu variants account for the vast majority of > > installs. > > > >>> There are three different things that's worth separating. One is a > >>> consistent kernel<->user space interface, this is what Linus Torvalds > >>> considers high priority when he says, "Thou shalt not break > >>> userspace". This is what allows pretty much all distributions to > >>> replace the kernel that was shipped with the distribution with the > >>> latest upstream kernel. And this is something that in general doesn't > >>> work with *BSD systems. > >> Eh? I feel like I can upgrade the kernel on the various BSDs > >> without binaries breaking pretty easily. Then again, there _have_ > >> been times when there were flag days that required rebuilding > >> the world; but surely externalities are more common here (e.g., > >> switching from one ISA to another). > > Try installing an OpenBSD kernel on FreeBSD, that's what we mean by > > compat. I'm more than willing to believe that you can pull head on > > the FreeBSD source tree and build & install it on FreeBSD. 
Much less > > willing to believe that that works Open/Free or Net/Free. > > > > With Linux, on pretty much any distro, you can pull Linus' tree and > > build and install it without drama. If you are running some ancient > > release you might have to update your toolchain but that's about it. > > Linus is super careful to not break the syscall table. It's extend > > only, which makes it a mess, but a binary compat mess. > > > >>> The second is application source-level compatibility, and this is what > >>> allows you to download some open source application, and recompile it > >>> on different Linux distributions, and it should Just Work. In > >>> practice this works for most Linux and *BSD users. > >> This, I think, is where things break down. Simply put, the way > >> people build applications has changed, and "source-level" > >> compatibility means compatibility with a bunch of third-party > >> libraries; in many ways the kernel interfaces matter much, much > >> less (many of which are defined by externally imposed standards > >> anyway). If a distro ships a too-old or too-new version of the > >> dependency, then the open source thing will often not build, and > >> for most end users, this is a distinction without a difference. > > Yes, you are correct, I've experienced that as well with sort of > > newer complex apps. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Wed Jun 8 02:45:53 2022 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 7 Jun 2022 09:45:53 -0700 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: <20220607164553.GU15041@mcvoy.com> On Tue, Jun 07, 2022 at 10:38:57AM -0600, Warner Losh wrote: > On Tue, Jun 7, 2022, 9:03 AM Will Senn wrote: > > > Interesting crossover from Linux to Linux Distros... 
Debian's my > > personal fave (in the form of Mint, MX, or even Ubuntu), mostly cuz apt > > seems to just work (for me, ymmv) and rpm sucks :). However, all of > > these run on the same kernel and generally provide the same userland > > (core-utils, etc). > > > But not the same binaries. I've run into a lot of issues trying to run a > binary for Debian on red hat or vice Vera due to shared libraries not being > completely compatible... kinda makes the whole system call argument moot > since there is always significant version skew... Yep, shared libraries can screw you but that's true anywhere. -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From blake1024 at gmail.com Wed Jun 8 02:46:53 2022 From: blake1024 at gmail.com (Blake McBride) Date: Tue, 7 Jun 2022 11:46:53 -0500 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: On Tue, Jun 7, 2022 at 11:39 AM Warner Losh wrote: > > But not the same binaries. I've run into a lot of issues trying to run a > binary for Debian on red hat or vice Vera due to shared libraries not being > completely compatible... kinda makes the whole system call argument moot > since there is always significant version skew... > That's why God created static linking. Blake -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Wed Jun 8 02:57:15 2022 From: imp at bsdimp.com (Warner Losh) Date: Tue, 7 Jun 2022 10:57:15 -0600 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: <20220607164553.GU15041@mcvoy.com> References: <20220607152519.GN15041@mcvoy.com> <20220607164553.GU15041@mcvoy.com> Message-ID: On Tue, Jun 7, 2022, 9:45 AM Larry McVoy wrote: > On Tue, Jun 07, 2022 at 10:38:57AM -0600, Warner Losh wrote: > > On Tue, Jun 7, 2022, 9:03 AM Will Senn wrote: > > > > > Interesting crossover from Linux to Linux Distros... 
Debian's my > > > personal fave (in the form of Mint, MX, or even Ubuntu), mostly cuz apt > > > seems to just work (for me, ymmv) and rpm sucks :). However, all of > > > these run on the same kernel and generally provide the same userland > > > (core-utils, etc). > > > > > > But not the same binaries. I've run into a lot of issues trying to run a > > binary for Debian on red hat or vice Vera due to shared libraries not > being > > completely compatible... kinda makes the whole system call argument moot > > since there is always significant version skew... > > Yep, shared libraries can screw you but that's true anywhere. > Kinda my point: you brag of a misleading compatibility and then attack others that decide to slice things up differently and don't have that, these days useless, talking point. Warner -- > --- > Larry McVoy lm at mcvoy.com > http://www.mcvoy.com/lm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cowan at ccil.org Wed Jun 8 02:57:59 2022 From: cowan at ccil.org (John Cowan) Date: Tue, 7 Jun 2022 12:57:59 -0400 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: <202206060920.2569KNx6016227@freefriends.org> References: <20220606022655.GI10240@mcvoy.com> <202206060920.2569KNx6016227@freefriends.org> Message-ID: On Mon, Jun 6, 2022 at 5:20 AM wrote: > There's a lot of stuff there that's familiar, straight from V7. > But yes, there's also a lot of stuff that's unique to USG Unix of the time. > As a non-insider, here's what I see that's unfamiliar: In Volume 1: - -mv macros for viewgraphs and slides - the *full* C reference manual (oopsie!) without the "late K&R" addendum - make(1) with E.G. 
Bradford's changes - the sdb(1) debugger In Volume 2: - an SCCS front end (not the same as the BSD one) - a bunch of graphics commands - ged(1g), a graphics editor - stat, tools for analyzing data - vpm, the Virtual Protocol Machine for outboard comms - Unix RJE - Stand-Alone I/O Library for bare-metal programs - Equipment Test Package -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.winalski at gmail.com Wed Jun 8 03:00:40 2022 From: paul.winalski at gmail.com (Paul Winalski) Date: Tue, 7 Jun 2022 13:00:40 -0400 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: On 6/7/22, Warner Losh wrote: > > But not the same binaries. I've run into a lot of issues trying to run a > binary for Debian on red hat or vice Vera due to shared libraries not being > completely compatible... kinda makes the whole system call argument moot > since there is always significant version skew... Part of my last job was to maintain a suite of software development and testing tools for our product across three different operating system platforms: Windows, Mac OS X, and Linux. The suite had to run on several versions of four or five Linux distributions. It is all user mode, unprivileged code. Windows and OS X rarely had problems with upward compatibility (newer versions able to run executables built for older versions), but Linux was living hell. Shared library compatibility was the biggest problem. Not only were shared libraries often incompatible between different Linux distributions, they were sometimes incompatible even between different versions of the same distribution. The problem of keeping shared libraries upward compatible from release to release was solved circa 1975 by the engineers who designed the VAX/VMS ABI. If not before that. 
It's not rocket science, but it does require a degree of discipline, care, and attention to detail when adding new or incompatible changes to an existing library. That bit of developer culture seems to be absent from Linux and the pieces of GNU that supply Linux's fundamental libraries (libc, etc.). To bring this back closer to TUHS, I don't know if the Unix distributions that support shared libraries suffer from the same problem. -Paul W. From lm at mcvoy.com Wed Jun 8 03:05:34 2022 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 7 Jun 2022 10:05:34 -0700 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> <20220607164553.GU15041@mcvoy.com> Message-ID: <20220607170534.GW15041@mcvoy.com> On Tue, Jun 07, 2022 at 10:57:15AM -0600, Warner Losh wrote: > On Tue, Jun 7, 2022, 9:45 AM Larry McVoy wrote: > > > On Tue, Jun 07, 2022 at 10:38:57AM -0600, Warner Losh wrote: > > > On Tue, Jun 7, 2022, 9:03 AM Will Senn wrote: > > > > > > > Interesting crossover from Linux to Linux Distros... Debian's my > > > > personal fave (in the form of Mint, MX, or even Ubuntu), mostly cuz apt > > > > seems to just work (for me, ymmv) and rpm sucks :). However, all of > > > > these run on the same kernel and generally provide the same userland > > > > (core-utils, etc). > > > > > > > > > But not the same binaries. I've run into a lot of issues trying to run a > > > binary for Debian on red hat or vice Vera due to shared libraries not > > being > > > completely compatible... kinda makes the whole system call argument moot > > > since there is always significant version skew... > > > > Yep, shared libraries can screw you but that's true anywhere. > > > > Kinda my point: you brag of a misleading compatibility and then attack > others that decide to slice things up differently and don't have that, > these days useless, talking point. > > Warner OK, Warner knower of all things, I'm sure you are right. 
It's not like I've done the things I've talked about, I'm actually an AI bot programmed to annoy you. From paul.winalski at gmail.com Wed Jun 8 03:26:01 2022 From: paul.winalski at gmail.com (Paul Winalski) Date: Tue, 7 Jun 2022 13:26:01 -0400 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: On 6/7/22, Blake McBride wrote: (regarding shared library incompatibility between Linux versions) > > That's why God created static linking. I assume you're being at least partly facetious. Maintaining upward compatibility for shared libraries has been a solved problem for about 50 years. Many OSes other than Linux do/have solved the problem. There's no excuse for it other than laziness or ignorance. -Paul W. From g.branden.robinson at gmail.com Wed Jun 8 05:32:33 2022 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Tue, 7 Jun 2022 14:32:33 -0500 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: <20220606022655.GI10240@mcvoy.com> <202206060920.2569KNx6016227@freefriends.org> Message-ID: <20220607193233.3qh2lu3hzpa42zcj@illithid> At 2022-06-07T12:57:59-0400, John Cowan wrote: > As a non-insider, here's what I see that's unfamiliar: > > In Volume 1: [...] > - the *full* C reference manual (oopsie!) without the "late K&R" addendum By that addendum do you mean the "Recent Changes to C" 1-page memo dated 1978-11-15 that appears with some copies of Seventh Edition Unix documentation? For those who don't have it handy, it documents structure assignment and introduces enum types. Or is there another piece of samizdat I should keep an eye out for? :) Regards, Branden -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From blake1024 at gmail.com Wed Jun 8 06:09:19 2022 From: blake1024 at gmail.com (Blake McBride) Date: Tue, 7 Jun 2022 15:09:19 -0500 Subject: [TUHS] Fwd: [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: On Tue, Jun 7, 2022 at 12:26 PM Paul Winalski wrote: > On 6/7/22, Blake McBride wrote: > (regarding shared library incompatibility between Linux versions) > > > > That's why God created static linking. > > I assume you're being at least partly facetious. Maintaining upward > compatibility for shared libraries has been a solved problem for about > 50 years. Many OSes other than Linux do/have solved the problem. > There's no excuse for it other than laziness or ignorance. > > -Paul W. > And there is the rub - laziness or ignorance. Unlike closed systems like Windows and macOS, it is harder to enforce rules with so many random developers. Further, Linux has so many developers that it changes far more often. Blake -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Wed Jun 8 08:26:13 2022 From: tuhs at tuhs.org (Warren Toomey via TUHS) Date: Wed, 8 Jun 2022 08:26:13 +1000 Subject: [TUHS] Please move the SIMH discussion to COFF Message-ID: Hi all, I think it's time to move the (no longer) SIMH discussion over to the COFF mailing list. The S/N ratio is dropping and we are straying too far away from the Unix side of things. Many thanks! 
Warren From cmhanson at eschatologist.net Wed Jun 8 09:41:06 2022 From: cmhanson at eschatologist.net (Chris Hanson) Date: Tue, 7 Jun 2022 16:41:06 -0700 Subject: [TUHS] [simh] Announcing the Open SIMH project In-Reply-To: References: <20220607152519.GN15041@mcvoy.com> Message-ID: <931C40D7-F780-454F-B51B-747C713CD31C@eschatologist.net> On Jun 7, 2022, at 10:00 AM, Paul Winalski wrote: > > Windows and OS X rarely had problems with upward compatibility (newer > versions able to run executables built for older versions), but Linux > was living hell. Shared library compatibility was the biggest > problem. Not only were shared libraries often incompatible between > different Linux distributions, they were sometimes incompatible even > between different versions of the same distribution. That's because, at least when it comes to macOS (nee OS X, nee Mac OS X, nee OPENSTEP/Mach, nee NEXTSTEP in various capitalizations) we treat binary compatibility as something for the operating system as a whole to maintain, not just the kernel or the kernel userspace. Linux's ABI compatibility is itself kind of bare-bones; it achieves userspace compatibility by having a fixed set of system call numbers with well-specified calling sequences that get compiled into every binary for a particular architecture, and it doesn't even attempt to provide the kernel ABI compatibility needed by commercial driver vendors. We handle userspace ABI compatibility in macOS by actually putting the syscalls on the other side of a shared library (libSystem.dylib) so the calling sequences and syscall numbers are entirely hidden from userspace. We've historically taken a different approach to kernel ABI compatibility but have provided it too, though not to the same extent as userspace. 
As an example of where this helps, things like Linux-derived containers would be much faster on non-Linux platforms if the container system could swap in its own "libsyscall.so" instead of having to run atop a VM of some sort to handle the Linux syscall traps. -- Chris From silas8642 at hotmail.co.uk Thu Jun 9 09:15:55 2022 From: silas8642 at hotmail.co.uk (silas poulson) Date: Wed, 8 Jun 2022 23:15:55 +0000 Subject: [TUHS] blast from the past In-Reply-To: <39D6E93C-B6CD-444B-B320-93FA7060E7D7@humeweb.com> References: <39D6E93C-B6CD-444B-B320-93FA7060E7D7@humeweb.com> Message-ID: Ah, an excellent reminder! I tend to watch it every now and again to enthuse myself Silas On 6 Jun 2022, at 16:43, Andrew Hume > wrote: this is an old video, new to me, but i’m sure others on this list have seen it. its a little long, but has al aho, jon bentley, bjarne, ken&denis, plan 9 amongst others. https://www.youtube.com/watch?v=IFfdnFOiXUU&t=2s -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfawcus+lists-tuhs at employees.org Fri Jun 10 08:19:22 2022 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Thu, 9 Jun 2022 23:19:22 +0100 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: <20220607193233.3qh2lu3hzpa42zcj@illithid> References: <20220606022655.GI10240@mcvoy.com> <202206060920.2569KNx6016227@freefriends.org> <20220607193233.3qh2lu3hzpa42zcj@illithid> Message-ID: On Tue, Jun 07, 2022 at 02:32:33PM -0500, G. Branden Robinson wrote: > > By that addendum do you mean the "Recent Changes to C" 1-page memo dated > 1978-11-15 that appears with some copies of Seventh Edition Unix > documentation? > > For those who don't have it handy, it documents structure assignment and > introduces enum types. > > Or is there another piece of samizdat I should keep an eye out for? 
:) Have a look here: https://www.bell-labs.com/usr/dmr/www/cchanges.pdf DF From egbegb2 at gmail.com Fri Jun 10 16:47:43 2022 From: egbegb2 at gmail.com (Ed Bradford) Date: Fri, 10 Jun 2022 01:47:43 -0500 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: Hi Warren, Thank you for the amazing Unix documentation. Do you know if there is source code for SCCS anywhere on the net? Ed Bradford On Sun, Jun 5, 2022 at 8:40 PM Warren Toomey via TUHS wrote: > Hi all, we have a new addition to the Unix Archive at: > https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ > > This is the documentation for Unix 4.0 which preceded System V. The > documents were provided by Arnold Robbins and scanned in by Matt Gilmore. > > Cheers, Warren > -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Fri Jun 10 17:31:48 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Fri, 10 Jun 2022 01:31:48 -0600 Subject: [TUHS] Source code for SCCS (was Re: Re: Documentation for Unix 4.0) In-Reply-To: References: Message-ID: <202206100731.25A7Vm29022762@freefriends.org> The GNU project has CSSC, which is an SCCS clone. HTH, Arnold Ed Bradford wrote: > Hi Warren, > > Thank you for the amazing Unix documentation. > Do you know if there is source code for SCCS anywhere on the net? > > Ed Bradford > > On Sun, Jun 5, 2022 at 8:40 PM Warren Toomey via TUHS wrote: > > > Hi all, we have a new addition to the Unix Archive at: > > https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ > > > > This is the documentation for Unix 4.0 which preceded System V. The > > documents were provided by Arnold Robbins and scanned in by Matt Gilmore. > > > > Cheers, Warren > > > > > -- > Advice is judged by results, not by intentions.
> Cicero From m at mbsks.franken.de Fri Jun 10 18:38:17 2022 From: m at mbsks.franken.de (Matthias Bruestle) Date: Fri, 10 Jun 2022 10:38:17 +0200 Subject: [TUHS] Source code for SCCS (was Re: Re: Documentation for Unix 4.0) In-Reply-To: <202206100731.25A7Vm29022762@freefriends.org> References: <202206100731.25A7Vm29022762@freefriends.org> Message-ID: On Fri, Jun 10, 2022 at 01:31:48AM -0600, arnold at skeeve.com wrote: > The GNU project has CSSC, which is an SCCS clone. I had a look at what SCCS is. I know it now, found CSSC, but also http://sccs.sourceforge.net/ and that Wikipedia points me to https://publications.opengroup.org/ as the official repository, where I don't find anything about SCCS. Matthias -- When You Find Out Your Normal Daily Lifestyle Is Called Quarantine From arnold at skeeve.com Fri Jun 10 20:03:47 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Fri, 10 Jun 2022 04:03:47 -0600 Subject: [TUHS] Source code for SCCS (was Re: Re: Documentation for Unix 4.0) In-Reply-To: References: <202206100731.25A7Vm29022762@freefriends.org> Message-ID: <202206101003.25AA3lie013752@freefriends.org> Matthias Bruestle wrote: > On Fri, Jun 10, 2022 at 01:31:48AM -0600, arnold at skeeve.com wrote: > > The GNU project has CSSC, which is an SCCS clone. > > I had a look at what SCCS is. I know it now, found CSSC, but also > http://sccs.sourceforge.net/ That is what you're looking for, the source to SCCS. > and that Wikipedia points me to > https://publications.opengroup.org/ as the official repository, > where I don't find anything about SCCS. That site has the POSIX standards which describe how SCCS is supposed to work, not the source code for it.
HTH, Arnold From clemc at ccc.com Sat Jun 11 00:22:40 2022 From: clemc at ccc.com (Clem Cole) Date: Fri, 10 Jun 2022 10:22:40 -0400 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: The original Marc Rochkind/John Mashey and team code from PWB 1.0 can be found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz In the directory: sys/source/sccs4 The man pages are in the same archive but mixed with the rest of the commands in usr/man/man* That said, there is a GNU version of same written in C++ IIRC: https://www.gnu.org/software/cssc/ And there's more ... but I'll let Larry offer details here, other than to point out his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper, and is a more modern implementation still] On Fri, Jun 10, 2022 at 2:48 AM Ed Bradford wrote: > Hi Warren, > > Thank you for the amazing Unix documentation. > Do you know if there is source code for SCCS anywhere on the net? > > Ed Bradford > > On Sun, Jun 5, 2022 at 8:40 PM Warren Toomey via TUHS wrote: > >> Hi all, we have a new addition to the Unix Archive at: >> https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ >> >> This is the documentation for Unix 4.0 which preceded System V. The >> documents were provided by Arnold Robbins and scanned in by Matt Gilmore. >> >> Cheers, Warren >> > > > -- > Advice is judged by results, not by intentions. > Cicero > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cowan at ccil.org Sat Jun 11 05:24:13 2022 From: cowan at ccil.org (John Cowan) Date: Fri, 10 Jun 2022 15:24:13 -0400 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: On Fri, Jun 10, 2022 at 10:23 AM Clem Cole wrote: That said, there is Gnu version of same written C++ if IIRC: > https://www.gnu.org/software/cssc/ > ... which stands for Compatibly Stupid Source Control. > > And there's more ...
> > In particular, the Heirloom Toolkit at < https://sourceforge.net/projects/heirloom/> contains the Solaris version of SCCS, and SRC source control provides a modern-style front-end over either RCS or SCCS; it is designed for maintaining single files, possibly in the same directory, without entangling their versioning. -------------- next part -------------- An HTML attachment was scrubbed... URL: From egbegb2 at gmail.com Sat Jun 11 14:34:58 2022 From: egbegb2 at gmail.com (Ed Bradford) Date: Fri, 10 Jun 2022 23:34:58 -0500 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: Thank you, Clem. Your link didn't work but the other information on cssc worked fine. Ed On Fri, Jun 10, 2022 at 9:23 AM Clem Cole wrote: > The original Marc Rochkind/John Mashey and team code from PWB 1.0 can be > found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz > In the directory: sys/source/sccs4 > The man pages are in the same archive but mixed with the rest of the > commands in usr/man/man* > > That said, there is Gnu version of same written C++ if IIRC: > https://www.gnu.org/software/cssc/ > > And there's more ... but I'll Larry offer details here other than point > out his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper] is a > more modern implementation still] > > > On Fri, Jun 10, 2022 at 2:48 AM Ed Bradford wrote: > >> Hi Warren, >> >> Thank you for the amazing Unix documentation. >> Do you know if there is source code for SCCS anywhere on the net? >> >> Ed Bradford >> >> On Sun, Jun 5, 2022 at 8:40 PM Warren Toomey via TUHS >> wrote: >> >>> Hi all, we have a new addition to the Unix Archive at: >>> https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ >>> >>> This is the documentation for Unix 4.0 which preceded System V. The >>> documents were provided by Arnold Robbins and scanned in by Matt Gilmore. >>> >>> Cheers, Warren >>> >> >> >> -- >> Advice is judged by results, not by intentions.
>> Cicero >> >> -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnr at planet.nl Sat Jun 11 20:09:17 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Sat, 11 Jun 2022 12:09:17 +0200 Subject: [TUHS] Source code for SCCS Message-ID: It would seem that a Spinellis-like exercise for SCCS is possible: PWB1.0 (1978): https://www.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/source/sccs4 SysIII (1980): https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/cmd/sccs SysVr1 (1983): https://www.tuhs.org/cgi-bin/utree.pl?file=pdp11v/usr/src/cmd/sccs SysVr2 (1984): https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr2-vax/src/cmd/sccs SysVr3 (1987): https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr3/301/usr/src/cmd/sccs SysVr4 (1988): https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr4/svr4/cmd/sccs Ultrix3.1 (1988): https://www.tuhs.org/cgi-bin/utree.pl?file=Ultrix-3.1/src/cmd/sccs I did not find SCCS sources included with the BSD sources on TUHS, but there is a front-end “sccs” command. For sure, SCCS was used for BSD development. Kirk McKusick’s DVD has a directory "CSRG/historic1/sccscmds”, but I did not look into this further. From here the trail probably continues with Solaris, GNU and Bitmover -- all very much outside my timeframe of research. Paul > The original Marc Rochchild/John Mashey and team code from PWB 1.0 can be > found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz > In the directory: sys/source/sccs4 > The man pages are in the same archive but mixed with the rest of the > commands in usr/man/man* > > That said, there is Gnu version of same written C++ if IIRC: > https://www.gnu.org/software/cssc/ > > And there's more ... 
but I'll Larry offer details here other than point out > his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper] is a > more modern implementation still] From clemc at ccc.com Sun Jun 12 00:43:59 2022 From: clemc at ccc.com (Clem Cole) Date: Sat, 11 Jun 2022 10:43:59 -0400 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: You're welcome. What do you mean, the first link did not work? It's a tarball that has to be unpacked, and then you look inside for the original code. It's there. I just tried it. On Sat, Jun 11, 2022 at 12:35 AM Ed Bradford wrote: > Thank you, Clem. Your link didn't work but the other information on cssc > worked fine. > > Ed > > > On Fri, Jun 10, 2022 at 9:23 AM Clem Cole wrote: > >> The original Marc Rochkind/John Mashey and team code from PWB 1.0 can be >> found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz >> In the directory: sys/source/sccs4 >> The man pages are in the same archive but mixed with the rest of the >> commands in usr/man/man* >> >> That said, there is Gnu version of same written C++ if IIRC: >> https://www.gnu.org/software/cssc/ >> >> And there's more ... but I'll Larry offer details here other than point >> out his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper] is a >> more modern implementation still] >> >> >> On Fri, Jun 10, 2022 at 2:48 AM Ed Bradford wrote: >> >>> Hi Warren, >>> >>> Thank you for the amazing Unix documentation. >>> Do you know if there is source code for SCCS anywhere on the net? >>> >>> Ed Bradford >>> >>> On Sun, Jun 5, 2022 at 8:40 PM Warren Toomey via TUHS >>> wrote: >>> >>>> Hi all, we have a new addition to the Unix Archive at: >>>> https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ >>>> >>>> This is the documentation for Unix 4.0 which preceded System V. The >>>> documents were provided by Arnold Robbins and scanned in by Matt >>>> Gilmore.
>>>> >>>> Cheers, Warren >>>> >>> >>> >>> -- >>> Advice is judged by results, not by intentions. >>> Cicero >>> >>> > > -- > Advice is judged by results, not by intentions. > Cicero > > -- Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed... URL: From egbegb2 at gmail.com Sun Jun 12 15:45:51 2022 From: egbegb2 at gmail.com (Ed Bradford) Date: Sun, 12 Jun 2022 00:45:51 -0500 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: [image: image.png] On Sat, Jun 11, 2022 at 9:44 AM Clem Cole wrote: > Your welcome. What do you mean by the first link did not work. It’s a > tarball that has to be decoded and then look inside for the original code. > It’s there. I just tried it. > > On Sat, Jun 11, 2022 at 12:35 AM Ed Bradford wrote: > >> Thank you, Clem. You link didn't work but the other information on cssc >> worked fine. >> >> Ed >> >> >> On Fri, Jun 10, 2022 at 9:23 AM Clem Cole wrote: >> >>> The original Marc Rochchild/John Mashey and team code from PWB 1.0 can >>> be found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz >>> In the directory: sys/source/sccs4 >>> The man pages are in the same archive but mixed with the rest of the >>> commands in usr/man/man* >>> >>> That said, there is Gnu version of same written C++ if IIRC: >>> https://www.gnu.org/software/cssc/ >>> >>> And there's more ... but I'll Larry offer details here other than point >>> out his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper] is a >>> more modern implementation still] >>> ᐧ >>> >>> On Fri, Jun 10, 2022 at 2:48 AM Ed Bradford wrote: >>> >>>> Hi Warren, >>>> >>>> Thank you for the amazing Unix documetation. >>>> Do you know if there is a source code for SCCS anywhere on the net? 
>>>> >>>> Ed Bradford >>>> >>>> On Sun, Jun 5, 2022 at 8:40 PM Warren Toomey via TUHS >>>> wrote: >>>> >>>>> Hi all, we have a new addition to the Unix Archive at: >>>>> https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ >>>>> >>>>> This is the documentation for Unix 4.0 which preceded System V. The >>>>> documents were provided by Arnold Robbins and scanned in by Matt >>>>> Gilmore. >>>>> >>>>> Cheers, Warren >>>>> >>>> >>>> >>>> -- >>>> Advice is judged by results, not by intentions. >>>> Cicero >>>> >>>> >> >> -- >> Advice is judged by results, not by intentions. >> Cicero >> >> -- > Sent from a handheld expect more typos than usual > -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 188213 bytes Desc: not available URL: From tuhs at tuhs.org Sun Jun 12 16:41:00 2022 From: tuhs at tuhs.org (Warren Toomey via TUHS) Date: Sun, 12 Jun 2022 16:41:00 +1000 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: On Sat, Jun 11, 2022 at 12:35 AM Ed Bradford <[2]egbegb2 at gmail.com> wrote: > Thank you, Clem. Your link didn't work but the other information on cssc > worked fine. All, it turns out that this was my fault. I'd moved the A record for www.tuhs.org over to the new IP address, but I hadn't moved the A record for tuhs.org over to that new IP address. I've just done so, but it will take a while for the DNS records to propagate. Apologies! Warren From tuhs at tuhs.org Mon Jun 13 23:18:24 2022 From: tuhs at tuhs.org (Jay Logue via TUHS) Date: Mon, 13 Jun 2022 06:18:24 -0700 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: Message-ID: <165512630943.1470.17753618581265860679@minnie.tuhs.org> On 6/11/2022 11:41 PM, Warren Toomey via TUHS wrote: > All, it turns out that this was my fault.
I'd moved the A record for > www.tuhs.org over to the new IP address, but I hadn't moved the A record > for tuhs.org over to that new IP address. Maybe make www.tuhs.org a CNAME for tuhs.org? --Jay From norman at oclsc.org Tue Jun 14 01:49:38 2022 From: norman at oclsc.org (Norman Wilson) Date: Mon, 13 Jun 2022 11:49:38 -0400 (EDT) Subject: [TUHS] Documentation for Unix 4.0 Message-ID: <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> Maybe make www.tuhs.org a CNAME for tuhs.org? Surely a site devoted to the history of UNIX should use a real link, not a symbolic one. Norman `Old Fart' Wilson Toronto ON From michael at kjorling.se Tue Jun 14 02:39:13 2022 From: michael at kjorling.se (Michael Kjörling) Date: Mon, 13 Jun 2022 16:39:13 +0000 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> References: <165512630943.1470.17753618581265860679@minnie.tuhs.org> <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> Message-ID: <54cd67b2-78f3-4ad3-a3c0-d0b885e67970@home.arpa> On 13 Jun 2022 11:49 -0400, from norman at oclsc.org (Norman Wilson): >> Maybe make www.tuhs.org a CNAME for tuhs.org? > > Surely a site devoted to the history of UNIX should use a > real link, not a symbolic one. Surely a site that aims to collect information should have a single canonical name, not multiple ones that lead to the same content on the same host. I would suggest to pick either www.tuhs.org or tuhs.org as the HTTP hostname, and make the other redirect to the first (or remove HTTP service from the not-chosen one entirely) only so as to not break existing links from elsewhere.
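For illustration, the two arrangements being debated differ by a single record type in the zone. This is a hypothetical fragment in BIND zone-file syntax; the address is a documentation placeholder (203.0.113.10), not tuhs.org's real one:

```
; Option 1: tuhs.org owns the A record; www is an alias ("symbolic link")
tuhs.org.        IN  A      203.0.113.10
www.tuhs.org.    IN  CNAME  tuhs.org.

; Option 2: two independent A records ("hard links") at the same address
; tuhs.org.      IN  A      203.0.113.10
; www.tuhs.org.  IN  A      203.0.113.10
```

Either way, an HTTP-level permanent redirect from the non-canonical name, as suggested above, keeps existing links working.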
-- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “Remember when, on the Internet, nobody cared that you were a dog?” From stu at remphrey.net Wed Jun 15 11:53:08 2022 From: stu at remphrey.net (Stuart Remphrey) Date: Wed, 15 Jun 2022 09:53:08 +0800 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: <54cd67b2-78f3-4ad3-a3c0-d0b885e67970@home.arpa> References: <165512630943.1470.17753618581265860679@minnie.tuhs.org> <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> <54cd67b2-78f3-4ad3-a3c0-d0b885e67970@home.arpa> Message-ID: Yes, tuhs.org:80 &443 could permanently redirect to www.tuhs.org so browsers update to the full canonical name (assuming that's the desired name). Though I think Norman was drawing an analogy between A-records/hard links and CNAME/symlinks, then observing that prior to 4.2BSD in 1983 there were no symlinks only hard links, ditto CNAMEs in RFC-882, also 1983. So if we're going back further, we shouldn't use them (it breaks down a little when considering A-records though, since we can't easily not use those!) On Tue, 14 Jun 2022, 00:44 Michael Kjörling, wrote: > On 13 Jun 2022 11:49 -0400, from norman at oclsc.org (Norman Wilson): > >> Maybe make www.tuhs.org a CNAME for tuhs.org? > > > > Surely a site devoted to the history of UNIX should use a > > real link, not a symbolic one. > > Surely a site that aims to collect information should have a single > canonical name, not multiple ones that lead to the same content on the > same host. > > I would suggest to pick either www.tuhs.org or tuhs.org as the HTTP > hostname, and make the other redirect to the first (or remove HTTP > service from the not-chosen one entirely) only so as to not break > existing links from elsewhere. > > -- > Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se > “Remember when, on the Internet, nobody cared that you were a dog?” > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stu at remphrey.net Wed Jun 15 15:15:12 2022 From: stu at remphrey.net (Stuart Remphrey) Date: Wed, 15 Jun 2022 13:15:12 +0800 Subject: [TUHS] Source code for SCCS In-Reply-To: References: Message-ID: We used RCS & some SCCS for Pyramid DC/OSx (mid/late 80s?) I've no idea whether Pyramid sources survive anywhere: if anyone's aware of any, I'd be very interested to know... On Sat, 11 Jun 2022, 18:09 Paul Ruizendaal, wrote: > It would seem that a Spinellis-like exercise for SCCS is possible: > > PWB1.0 (1978): > https://www.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/source/sccs4 > SysIII (1980): > https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/cmd/sccs > SysVr1 > > (1983): https://www.tuhs.org/cgi-bin/utree.pl?file=pdp11v/usr/src/cmd/sccs > SysVr2 > > (1984): > https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr2-vax/src/cmd/sccs > SysVr3 > > (1987): > https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr3/301/usr/src/cmd/sccs > SysVr4 > > (1988): > https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr4/svr4/cmd/sccs > Ultrix3.1 (1988): > https://www.tuhs.org/cgi-bin/utree.pl?file=Ultrix-3.1/src/cmd/sccs > > I did not find SCCS sources included with the BSD sources on TUHS, but > there is a front-end “sccs” command. For sure, SCCS was used for BSD > development. Kirk McKusick’s DVD has a directory "CSRG/historic1/sccscmds”, > but I did not look into this further. > > From here the trail probably continues with Solaris, GNU and Bitmover -- > all very much outside my timeframe of research. > > Paul > > > The original Marc Rochchild/John Mashey and team code from PWB 1.0 can be > > found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz > > In the directory: sys/source/sccs4 > > The man pages are in the same archive but mixed with the rest of the > > commands in usr/man/man* > > > > That said, there is Gnu version of same written C++ if IIRC: > > https://www.gnu.org/software/cssc/ > > > > And there's more ... 
but I'll Larry offer details here other than point > out > > his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper] is a > > more modern implementation still] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stu at remphrey.net Wed Jun 15 15:32:16 2022 From: stu at remphrey.net (Stuart Remphrey) Date: Wed, 15 Jun 2022 13:32:16 +0800 Subject: [TUHS] Source code for SCCS (was Re: Re: Documentation for Unix 4.0) In-Reply-To: <202206101003.25AA3lie013752@freefriends.org> References: <202206100731.25A7Vm29022762@freefriends.org> <202206101003.25AA3lie013752@freefriends.org> Message-ID: > https://publications.opengroup.org/ > as the official repository, > where I don't find anything about SCCS Pubs.OpenGroup links to their unix.org site, which amongst others includes the Commands & Utilities std, though not the code, which may include SCCS (I didn't check further): https://unix.org/version4/xcu_contents.html If the SCCS commands are standardised there, I expect (hope?) the file format is also specified in one of those docs, for file interchange compatibility. (Ongoing maintenance is handled by the Austin Group, also linked from there) On Fri, 10 Jun 2022, 18:04 , wrote: > Matthias Bruestle wrote: > > > On Fri, Jun 10, 2022 at 01:31:48AM -0600, arnold at skeeve.com wrote: > > > The GNU project has CSSC, which is an SCCS clone. > > > > I had a look what SCCS. I know it now, found CSSC, but also > > http://sccs.sourceforge.net/ > > That is what you're looking for, the source to SCCS. > > > and that Wikipedia points me to > > https://publications.opengroup.org/ as the official repository, > > where I don't find anything about SCCS. > > That site has the POSIX standards which describe how SCCS is > supposed to work, not the source code for it. > > HTH, > > Arnold > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michael at kjorling.se (Michael Kjörling) Date: Wed, 15 Jun 2022 05:57:38 +0000 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: References: <165512630943.1470.17753618581265860679@minnie.tuhs.org> <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> <54cd67b2-78f3-4ad3-a3c0-d0b885e67970@home.arpa> Message-ID: <393a28d5-98d7-44f1-908d-e70d0e8790db@home.arpa> On 15 Jun 2022 09:53 +0800, from stu at remphrey.net (Stuart Remphrey): > Though I think Norman was drawing an analogy between A-records/hard links > and CNAME/symlinks, then observing that prior to 4.2BSD in 1983 there were > no symlinks only hard links, ditto CNAMEs in RFC-882, also 1983. > So if we're going back further, we shouldn't use them (it breaks down a > little when considering A-records though, since we can't easily not use > those!) By much that same line of reasoning, tuhs.org shouldn't have any MX records either because the MX RRtype was introduced as recently as in 1986 (Wikipedia puts it at RFCs 973 and 974 [1] but without wide use until "in the early 1990s"). Let alone a web presence because HTTP and HTML came along even later. :-) [1] https://en.wikipedia.org/wiki/MX_record#Historical_background -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “Remember when, on the Internet, nobody cared that you were a dog?” From egbegb2 at gmail.com Wed Jun 15 16:21:46 2022 From: egbegb2 at gmail.com (Ed Bradford) Date: Wed, 15 Jun 2022 01:21:46 -0500 Subject: [TUHS] Source code for SCCS In-Reply-To: References: Message-ID: Thanks Paul. Of the 7 links you provided, the first one and the last two work. The others fail.
Ed Bradford Pflugerville, TX On Sat, Jun 11, 2022 at 5:09 AM Paul Ruizendaal wrote: > It would seem that a Spinellis-like exercise for SCCS is possible: > > PWB1.0 (1978): > https://www.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/source/sccs4 > SysIII (1980): > https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/cmd/sccs > SysVr1 > > (1983): https://www.tuhs.org/cgi-bin/utree.pl?file=pdp11v/usr/src/cmd/sccs > SysVr2 > > (1984): > https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr2-vax/src/cmd/sccs > SysVr3 > > (1987): > https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr3/301/usr/src/cmd/sccs > SysVr4 > > (1988): > https://github.com/ryanwoodsmall/oldsysv/tree/master/sysvr4/svr4/cmd/sccs > Ultrix3.1 (1988): > https://www.tuhs.org/cgi-bin/utree.pl?file=Ultrix-3.1/src/cmd/sccs > > I did not find SCCS sources included with the BSD sources on TUHS, but > there is a front-end “sccs” command. For sure, SCCS was used for BSD > development. Kirk McKusick’s DVD has a directory "CSRG/historic1/sccscmds”, > but I did not look into this further. > > From here the trail probably continues with Solaris, GNU and Bitmover -- > all very much outside my timeframe of research. > > Paul > > > The original Marc Rochchild/John Mashey and team code from PWB 1.0 can be > > found: http://tuhs.org/Archive/Distributions/USDL/spencer_pwb.tar.gz > > In the directory: sys/source/sccs4 > > The man pages are in the same archive but mixed with the rest of the > > commands in usr/man/man* > > > > That said, there is Gnu version of same written C++ if IIRC: > > https://www.gnu.org/software/cssc/ > > > > And there's more ... but I'll Larry offer details here other than point > out > > his: http://www.bitmover.com/bitsccs/ [which is of BitKeeper] is a > > more modern implementation still] > -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From crossd at gmail.com Wed Jun 15 22:04:07 2022 From: crossd at gmail.com (Dan Cross) Date: Wed, 15 Jun 2022 08:04:07 -0400 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: <393a28d5-98d7-44f1-908d-e70d0e8790db@home.arpa> References: <165512630943.1470.17753618581265860679@minnie.tuhs.org> <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> <54cd67b2-78f3-4ad3-a3c0-d0b885e67970@home.arpa> <393a28d5-98d7-44f1-908d-e70d0e8790db@home.arpa> Message-ID: On Wed, Jun 15, 2022, 1:57 AM Michael Kjörling wrote: > On 15 Jun 2022 09:53 +0800, from stu at remphrey.net (Stuart Remphrey): > > Though I think Norman was drawing an analogy between A-records/hard links > > and CNAME/symlinks, then observing that prior to 4.2BSD in 1983 there > were > > no symlinks only hard links, ditto CNAMEs in RFC-882, also 1983. > > So if we're going back further, we shouldn't use them (it breaks down a > > little when considering A-records though, since we can't easily not use > > those!) > > By much that same line of reasoning, tuhs.org shouldn't have any MX > records either because the MX RRtype was introduced as recently as in > 1986 (Wikipedia puts it at RFCs 973 and 974 [1] but without wide use > until "in the early 1990s"). Let alone a web presence because HTTP and > HTML came along even later. :-) > > [1] https://en.wikipedia.org/wiki/MX_record#Historical_background This definitely feels like it extends the joke a tad too far. :-) - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tuhs at tuhs.org Thu Jun 16 17:45:18 2022 From: tuhs at tuhs.org (Tom Ivar Helbekkmo via TUHS) Date: Thu, 16 Jun 2022 09:45:18 +0200 Subject: [TUHS] Documentation for Unix 4.0 In-Reply-To: (Dan Cross's message of "Wed, 15 Jun 2022 08:04:07 -0400") References: <165512630943.1470.17753618581265860679@minnie.tuhs.org> <8845BDF5A20243F3CD8C296C36066BB3.for-standards-violators@oclsc.org> <54cd67b2-78f3-4ad3-a3c0-d0b885e67970@home.arpa> <393a28d5-98d7-44f1-908d-e70d0e8790db@home.arpa> Message-ID: Dan Cross writes: > This definitely feels like it extends the joke a tad too far. :-) Nah, not until we start demanding ..!minnie!tuhs as the list address. -tih (who has two 11/23 systems with PWB 1.0 connected via UUCP) -- Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. --Alan Kay From robpike at gmail.com Fri Jun 17 09:06:16 2022 From: robpike at gmail.com (Rob Pike) Date: Fri, 17 Jun 2022 09:06:16 +1000 Subject: [TUHS] forgotten versions Message-ID: Excited as I was to see this history of Unix code in a single repository: https://github.com/dspinellis/unix-history-repo it continues the long-standing tradition of ignoring all the work done at Bell Labs after v7. I consider v8 v9 v10 to be worthy of attention, even influential, but to hear this list talk about it - or discussions just about anywhere else - you'd think they never existed. There are exceptions, but this site does reinforce the broadly known version of the story. It's doubly ironic for me because people often mistakenly credit me for working on Unix, but I landed at the Labs after v7 was long dispatched. At the Labs, I first worked on what became v8. I suppose it's because the history flowed as this site shows, with BSD being the driving force for a number of reasons, but it feels to me that a large piece of Unix history has been sidelined.
I know it's a whiny lament, but those neglected systems had interesting advances. -rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From earl.baugh at gmail.com Fri Jun 17 09:17:06 2022 From: earl.baugh at gmail.com (Earl Baugh) Date: Thu, 16 Jun 2022 19:17:06 -0400 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: I've only cursorily heard of versions past v7. I'd personally be interested in hearing the history and seeing what changes/improvements/differences came in those versions. I've learned that the Unix history I thought I knew had huge gaping holes in it from this list and members. Joe Ossanna's contributions, for example, were a complete revelation to me. Earl Sent from my iPhone > On Jun 16, 2022, at 7:06 PM, Rob Pike wrote: > > > Excited as I was to see this history of Unix code in a single repository: > > https://github.com/dspinellis/unix-history-repo > > it continues the long-standing tradition of ignoring all the work done at Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, even influential, but to hear this list talk about it - or discussions just about anywhere else - you'd think they never existed. There are exceptions, but this site does reinforce the broadly known version of the story. > > It's doubly ironic for me because people often mistakenly credit me for working on Unix, but I landed at the Labs after v7 was long dispatched. At the Labs, I first worked on what became v8. > > I suppose it's because the history flowed as this site shows, with BSD being the driving force for a number of reasons, but it feels to me that a large piece of Unix history has been sidelined. > > I know it's a whiny lament, but those neglected systems had interesting advances. > > -rob > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ggm at algebras.org Fri Jun 17 09:18:50 2022 From: ggm at algebras.org (George Michaelson) Date: Fri, 17 Jun 2022 09:18:50 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: you're not wrong, but the other take on this is that the AT&T licensing and some other things tended to make the circle of people who could "see" this code significantly smaller than those feeding off Unix 32V/v7 -> BSD -> Solaris. this isn't meant to imply you did anything "wrong" -It was probably a huge distraction having randoms begging for a tape of v8/9/10 with low to no willingness to "give back" -G On Fri, Jun 17, 2022 at 9:06 AM Rob Pike wrote: > > Excited as I was to see this history of Unix code in a single repository: > > https://github.com/dspinellis/unix-history-repo > > it continues the long-standing tradition of ignoring all the work done at Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, even influential, but to hear this list talk about it - or discussions just about anywhere else - you'd think they never existed. There are exceptions, but this site does reinforce the broadly known version of the story. > > It's doubly ironic for me because people often mistakenly credit me for working on Unix, but I landed at the Labs after v7 was long dispatched. At the Labs, I first worked on what became v8. > > I suppose it's because the history flowed as this site shows, with BSD being the driving force for a number of reasons, but it feels to me that a large piece of Unix history has been sidelined. > > I know it's a whiny lament, but those neglected systems had interesting advances. > > -rob > From ggm at algebras.org Fri Jun 17 09:44:02 2022 From: ggm at algebras.org (George Michaelson) Date: Fri, 17 Jun 2022 09:44:02 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: Another take on this is Mike Lesk's saying "its easy to occupy a vacuum its harder to push something aside" said of UUCP. 
v7 exploded into the world, and made BSD and SunOS happen. v8 and 9 and 10 had to work harder to get mindshare because something was already there. things like rc were too "confrontational" to a mind attuned to bourne shell. Sockets (which btw, totally SUCK PUS) were coded into things and even (YECHH) made POSIX and IETF spec status. Streams didn't stand a chance. basically, v7 succeeded too well, for v8/9/10 to get mindshare. I agree it sucks they aren't documented, it's just wrong: Serious OS history needs to look beyond the narrow path in view. I'd say anyone who doesn't write about them at length hasn't done their homework. -G On Fri, Jun 17, 2022 at 9:18 AM George Michaelson wrote: > > you're not wrong, but the other take on this is that the AT&T > licensing and some other things tended to make the circle of people > who could "see" this code significantly smaller than those feeding off > Unix 32V/v7 -> BSD -> Solaris. > > this isn't meant to imply you did anything "wrong" -It was probably a > huge distraction having randoms begging for a tape of v8/9/10 with low > to no willingness to "give back" > > -G > > On Fri, Jun 17, 2022 at 9:06 AM Rob Pike wrote: > > > > Excited as I was to see this history of Unix code in a single repository: > > > > https://github.com/dspinellis/unix-history-repo > > > > it continues the long-standing tradition of ignoring all the work done at Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, even influential, but to hear this list talk about it - or discussions just about anywhere else - you'd think they never existed. There are exceptions, but this site does reinforce the broadly known version of the story. > > > > It's doubly ironic for me because people often mistakenly credit me for working on Unix, but I landed at the Labs after v7 was long dispatched. At the Labs, I first worked on what became v8.
> > > > I suppose it's because the history flowed as this site shows, with BSD being the driving force for a number of reasons, but it feels to me that a large piece of Unix history has been sidelined. > > > > I know it's a whiny lament, but those neglected systems had interesting advances. > > > > -rob > > From lm at mcvoy.com Fri Jun 17 10:10:34 2022 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 16 Jun 2022 17:10:34 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <20220617001034.GA27651@mcvoy.com> On Fri, Jun 17, 2022 at 09:44:02AM +1000, George Michaelson wrote: > v7 exploded into the world, and made BSD and SunOS happen. > > v8 and 9 and 10 had to work harder to get mindshare because something > was already there. I think this is spot on. v7 was pretty easy to find in src form, I know I've seen some of v{8,9,10} in Shannon's treasure trove of Unix source at Sun but they were less common. > things like rc were too "confrontational" to a mind attuned to bourne > shell. Sockets (which btw, totally SUCK PUS) were coded into things > and even (YECHH) made POSIX and IETF spec status. Streams didn't stand > a chance. There was streams (from Dennis) and STREAMS from Sys whatever. I don't know how great streams was, I read the paper and it seemed fine for a tty driver, networking I dunno. And having seen an SGI SMP machine brought to its knees by racks and racks of modems, I'm not sure streams is even a good idea for ttys; it's fine for a personal system, I've never seen that sort of layered design perform well at scale. I have seen what a networking stack in STREAMS did, it was awful, absolutely awful. Sun bought the STREAMS networking stack from Lachman, same one that I ported to the ETA 10 and SCO Unix, it sucked hard. Sun threw it out, hired Mentat to give them a performant STREAMS stack, I'm not sure that ever worked.
I know they put back the socket interface, as much as people don't like it, it's a non-starter to have an OS without it. From dds at aueb.gr Fri Jun 17 17:20:59 2022 From: dds at aueb.gr (Diomidis Spinellis) Date: Fri, 17 Jun 2022 10:20:59 +0300 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <1cd0c358-256a-4636-3019-20fddbee2ff6@aueb.gr> It is indeed problematic that the Unix history repository is missing the Research Editions. At the time I created it, the source code of the Research Unix Eighth and Ninth Editions wasn't openly available. I'm now discussing with another member of this list for a pull request to add them. Incorporating them properly isn't trivial, because various mappings are needed to establish authorship information and to allow git-blame to work across snapshots of moved files. Diomidis - https://www.spinellis.gr/ On 17-Jun-22 2:06, Rob Pike wrote: > Excited as I was to see this history of Unix code in a single repository: > > https://github.com/dspinellis/unix-history-repo > > > it continues the long-standing tradition of ignoring all the work done > at Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, > even influential, but to hear this list talk about it - or discussions > just about anywhere else - you'd think they never existed. From robpike at gmail.com Fri Jun 17 17:33:56 2022 From: robpike at gmail.com (Rob Pike) Date: Fri, 17 Jun 2022 17:33:56 +1000 Subject: [TUHS] forgotten versions In-Reply-To: <1cd0c358-256a-4636-3019-20fddbee2ff6@aueb.gr> References: <1cd0c358-256a-4636-3019-20fddbee2ff6@aueb.gr> Message-ID: That's great. In the unlikely event I can help in any way, please let me know. -rob On Fri, Jun 17, 2022 at 5:21 PM Diomidis Spinellis wrote: > It is indeed problematic that the Unix history repository is missing the > Research Editions. At the time I created it, the source code of the > Research Unix Eighth and Ninth Editions wasn't openly available. 
I'm > now discussing with another member of this list for a pull request to > add them. Incorporating them properly isn't trivial, because various > mappings are needed to establish authorship information and to allow > git-blame to work across snapshots of moved files. > > Diomidis - https://www.spinellis.gr/ > > On 17-Jun-22 2:06, Rob Pike wrote: > > Excited as I was to see this history of Unix code in a single repository: > > > > https://github.com/dspinellis/unix-history-repo > > > > > > it continues the long-standing tradition of ignoring all the work done > > at Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, > > even influential, but to hear this list talk about it - or discussions > > just about anywhere else - you'd think they never existed. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Fri Jun 17 18:34:19 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Fri, 17 Jun 2022 02:34:19 -0600 Subject: [TUHS] forgotten versions In-Reply-To: <1cd0c358-256a-4636-3019-20fddbee2ff6@aueb.gr> References: <1cd0c358-256a-4636-3019-20fddbee2ff6@aueb.gr> Message-ID: <202206170834.25H8YJ9P002452@freefriends.org> Very cool. I assume you know this, but the Tenth Edition code is also in the TUHS archives and should not be left out. Thanks, Arnold Diomidis Spinellis wrote: > It is indeed problematic that the Unix history repository is missing the > Research Editions. At the time I created it, the source code of the > Research Unix Eighth and Ninth Editions wasn't openly available. I'm > now discussing with another member of this list for a pull request to > add them. Incorporating them properly isn't trivial, because various > mappings are needed to establish authorship information and to allow > git-blame to work across snapshots of moved files. 
> > Diomidis - https://www.spinellis.gr/ > > On 17-Jun-22 2:06, Rob Pike wrote: > > Excited as I was to see this history of Unix code in a single repository: > > > > https://github.com/dspinellis/unix-history-repo > > > > > > it continues the long-standing tradition of ignoring all the work done > > at Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, > > even influential, but to hear this list talk about it - or discussions > > just about anywhere else - you'd think they never existed. From arnold at skeeve.com Fri Jun 17 20:52:51 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Fri, 17 Jun 2022 04:52:51 -0600 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <202206171052.25HAqpsm023417@freefriends.org> Rob Pike wrote: > Excited as I was to see this history of Unix code in a single repository: > > https://github.com/dspinellis/unix-history-repo > > it continues the long-standing tradition of ignoring all the work done at > Bell Labs after v7. I consider v8 v9 v10 to be worth of attention, even > influential, I can think of at least 4 things, some big, some small, where post-V7 Research Unix was influential: - Streams (as STREAMS in S5R3 & greater) - The filesystem switch (in S5R3, replaced by Sun vnodes in S5R4) - /proc ! This lives on in most Unixes and in Linux - /dev/{stdin,stdout,stderr}, /dev/fd/N - Minor but a nice generalization The influence was less from code and more from published papers, but there certainly was a notable influence. I was lucky enough in the late 80s and 90s to have an inside friend in the labs (BWK), who was kind enough to obtain for me a real printed Eighth Edition manual. Later he put me in touch with Doug who at first wasn't sure, but found out that he could sell me a Ninth Edition manual ($50 IIRC). I bought the published Tenth Edition manuals as well. It was great to read those things, even if at the time I couldn't get to the code.
For whatever it's worth, Arnold From pnr at planet.nl Sat Jun 18 00:50:13 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Fri, 17 Jun 2022 16:50:13 +0200 Subject: [TUHS] forgotten versions Message-ID: Wholeheartedly agree with the observations on forgotten versions, lack of source and a smaller pool of people back in the day. It is not just the Research versions; the internal AT&T versions and the base System V versions also get little attention. Same reasons I think. Luckily, these days the sources are available (although in the case of SysV of unclear legal status). Part of the problem I think is that it is not well known what innovations are in each version. About 2 years ago I did a lot of spelunking through the V8 source and with the help of this list could come up with a list of highlights for V8 (text is now on the TUHS V8 source web page). Never had the time to do that for V9. I think it was mentioned that it had a new filesystem with a bitmap free list. Also, it seems to have a lot of cleaned-up implementations of things that were new and rough in V8. No clue what was new in V10. Similar with Unix 3, Unix 4 and Unix 5. I’m thrilled that the docs for Unix 4 showed up recently. In these docs there is no material on IPC. From this I think that the IPC primitives from CB-Unix did not get merged in Unix 4, but only in Unix 5 (in a reworked form). Personally, I’m still working (off and on) on recreating the Reiser demand paging system. To keep it interesting I’ve now got Sys III running on a small RISC-V board, and when I find another time slot I’ll try to add Reiser paging to it.
So the forgotten versions are only mostly forgotten, not totally forgotten :^) From pnr at planet.nl Sat Jun 18 01:18:14 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Fri, 17 Jun 2022 17:18:14 +0200 Subject: [TUHS] sockets [was Re: forgotten versions] Message-ID: > Sockets (which btw, totally SUCK PUS) were coded into things > and even (YECHH) made POSIX and IETF spec status. Streams didn't stand > a chance. The question that originally pulled me into researching Unix networking 1975-1985 was more or less “how did we end up with sockets?”. That was 7 years or so ago, I now have a basic feel for how it came to be, and I also have better appreciation of the trade offs. What is the most “Unixy” of networking (as in the API and its semantics) is not something with an easy answer. If I limit myself to the 1975-1985 time frame, I see three approaches: 1. The API used in Arpanet Unix, which was also used by BBN in its first reference implementation of TCP/IP 2. The BSD sockets API, in two flavours: the Joy flavour in BSD4.1a, and the Karels flavour in BSD4.1c and later 3. The Ritchie/Presotto IPC library / API from V8/V9. This evolved into SysV networking, but the original is the clean idea At a high level of abstraction, there is a lot of similarity; close-up they are quite different. I like all three solutions! One thing that struck my attention was that the Ritchie/Presotto IPC library has the concept of “calling” a host and the host/network can reply with a response code (“line busy”, “number unknown”, “not authorised”, etc.). BSD sockets do not cover that. I guess it derives from Spider/Datakit having that functionality, and Arpanet / tcp-ip not having that (resorting to a connection ‘reset’ or dead line instead). Sockets have a more elegant solution for connectionless datagrams (imo), and for the same reason I guess. Sure, sockets has too much of the implementation sticking through the abstractions, but it is IMO not a bad design. 
That it became dominant I think is in equal measure due to economics and due to being “good enough”. If someone has a proposal for a network API that is cleaner and better than what was out there, and would have worked with the hardware and use cases of the early 80’s, I’m all ears. But maybe better on COFF... Paul From bakul at iitbombay.org Sat Jun 18 02:23:57 2022 From: bakul at iitbombay.org (Bakul Shah) Date: Fri, 17 Jun 2022 09:23:57 -0700 Subject: [TUHS] Sockets vs Streams (was Re: forgotten versions In-Reply-To: References: Message-ID: On Jun 16, 2022, at 4:44 PM, George Michaelson wrote: > > Sockets (which btw, totally SUCK PUS) were coded into things > and even (YECHH) made POSIX and IETF spec status. Streams didn't stand > a chance. The stream abstraction is a nice (c)lean abstraction but it doesn't quite work for things like multicast or datagrams in general. Plan9 doesn't have sockets but the way it deals with UDP is not simple either. The complexity is in the protocols themselves. Even at layer 2 (below the IP layer) the amount of complexity is mind boggling (though I suppose high-speed backbone switches do all this in hardware!). From paul.winalski at gmail.com Sat Jun 18 03:43:13 2022 From: paul.winalski at gmail.com (Paul Winalski) Date: Fri, 17 Jun 2022 13:43:13 -0400 Subject: [TUHS] Sockets vs Streams (was Re: forgotten versions In-Reply-To: References: Message-ID: On 6/17/22, Bakul Shah wrote: > > The stream abstraction is a nice (c)lean abstraction but it doesn't > quite work for things like multicast or datagrams in general. Every networking protocol I know of involves the exchange of discrete packets of data and thus is inherently record-based. In my experience, layering a stream-oriented interface on top of that usually means that the software at the layers above that have to take extra measures to reconstruct the original record-oriented packets. -Paul W. 
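The record reconstruction Paul describes usually comes down to framing: the sender marks each record's boundary (most simply with a length prefix) and the receiver loops until it has re-assembled it, since a stream read may return any number of bytes. A minimal Python sketch of that extra layer; the helper names here are invented for illustration, not any historical API:

```python
import io
import struct

def send_record(stream, payload: bytes) -> None:
    # Prefix each record with a 4-byte big-endian length so the receiver
    # can rediscover the packet boundaries that the stream interface hides.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def _read_exact(stream, n: int) -> bytes:
    # A stream read may return fewer bytes than asked for, so loop.
    buf = b""
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:
            raise EOFError("stream closed mid-record")
        buf += chunk
    return buf

def recv_record(stream) -> bytes:
    (length,) = struct.unpack(">I", _read_exact(stream, 4))
    return _read_exact(stream, length)

# Two records written back to back arrive as one undifferentiated byte
# sequence; only the framing layer recovers them.
buf = io.BytesIO()
send_record(buf, b"first packet")
send_record(buf, b"second")
buf.seek(0)
assert recv_record(buf) == b"first packet"
assert recv_record(buf) == b"second"
```

The same shape works over a TCP socket file object; without the length prefix, consecutive writes can coalesce or split arbitrarily on the wire, which is exactly the "extra measures" point above.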
From norman at oclsc.org Sat Jun 18 04:54:45 2022 From: norman at oclsc.org (Norman Wilson) Date: Fri, 17 Jun 2022 14:54:45 -0400 (EDT) Subject: [TUHS] Sockets vs Streams (was Re: forgotten versions Message-ID: As one of the few remaining people who has actually worked with the original dmr stream I/O system, I'd love to dive into the debate here, but I don't have time. I can't resist commenting on a few things that have flown by, though. Maybe I can find time to engage better on the weekend. -- If you think there were no record delimiters in the stream system, you don't know enough about it. A good resource to repair that is Dennis's BSTJ paper: https://www.bell-labs.com/usr/dmr/www/st.pdf It's an early description and some of the details later evolved (for example we realized that once pipes were streams, we no longer needed pseudoterminals) but the fundamentals remained constant. See the section about Message blocks, and the distinction between data and control blocks. Delimiters were one kind of control; there were others, some of them potentially specific to the network at hand. In particular, Datakit (despite being a virtual-circuit network) had several sorts of control words, including message delimiters. Delimiters were necessary even just for terminals, though: how else does the generic read(2) code know to return early, before filling the buffer, when a complete line has arrived? -- It's hard to compare sockets to streams and make much sense, because they are very different kinds of thing. When people talk about sockets, especially when complaining that sockets are a mess, they usually mean the API: system calls like connect and listen and getpeername and so on. 
The stream system was mainly about the internal structure-- the composable modules and the general-purpose queue interface between them, so that you could take a stream representing an (already set up) network connection and push the tty module on it and have a working remote terminal, no need for user-mode programs and pseudo-terminals. It's not inconceivable to build a system with socket-like API and stream internals. -- Connection setup was initially done with network-specific magic messages and magic ioctls. Later we moved the knowledge of that messy crap into network-specific daemons, so a user program could make a network call just by calling fd = ipcopen("string-destination-name") without knowing or caring whether the network transport was TCP or Datakit or involved forwarding over Datakit to a gateway that then placed a TCP call to the Internet or whatnot. That's what the connection server was all about: https://www.bell-labs.com/usr/dmr/www/spe.pdf Again, the API is not specific to the stream system. It wouldn't be hard to write a connection server that provided the same universal just-use-a-string interface (with the privileged parts or network details left up to daemons) on a system with only socket networking; the only subtle bit is that it needs to be possible to pass an open file descriptor from one process to another (on the same system), which I don't think the socket world had early on but I believe they added long ago. -- There's nothing especially hard about UDP or broadcast. It's not as if the socket abstraction has some sort of magic datagram-specific file descriptor. 
Since every message sent and every message received has to include the far end's address info, you have to decide how to do that, whether by specifying a format for the data (the first N bytes are always the remote's address, for example) or provide an out-of-band mechanism (some ioctl mess that lets you supply it separately, a la sendto/recvfrom, and encodes it as a control message). There was an attempt to make UDP work in the 9th/10th edition era. I don't think it ever worked very cleanly. When I took an unofficial snapshot and started running the system at home in the mid-1990s, I ended up just tossing UDP out, because I didn't urgently need it (at the time TCP was good enough for DNS, and I had to write my own DNS resolver anyway). I figured I'd get around to fixing it later but never did. But I think the only hard part is in deciding on an interface. -- It's certainly true that the Research-system TCP/IP code was never really production-quality (and I say that even though I used it for my home firewall/gateway for 15 years). TCP/IP wasn't taken as seriously as it ought to have been by most of us in 1127 in the 1980s. But that wasn't because of the stream structure--the IP implementation was in fact a copy of that from 4.2 (I think) BSD, repackaged and shoehorned into the stream world by Robert T Morris, and later cleaned up quite a bit by Paul Glick. Maybe it would have worked better had it been done from scratch by someone who cared a lot about it, as the TCP/IP implementors in the BSD world very much did. Certainly it's a non-trivial design problem--the IP protocols and their sloppy observance of layering (cf the `pseudo header' in the TCP and UDP standards) make them more complicated to implement in a general-purpose framework. Or maybe it just can't be done, but I wish someone had tried in the original simpler setup rather than the cluttered SVr4 STREAMS. 
Norman Wilson Toronto ON From drsalists at gmail.com Sat Jun 18 08:52:31 2022 From: drsalists at gmail.com (Dan Stromberg) Date: Fri, 17 Jun 2022 15:52:31 -0700 Subject: [TUHS] Sockets vs Streams (was Re: forgotten versions In-Reply-To: References: Message-ID: On Fri, Jun 17, 2022 at 9:24 AM Bakul Shah wrote: > On Jun 16, 2022, at 4:44 PM, George Michaelson wrote: > > > > Sockets (which btw, totally SUCK PUS) were coded into things > > and even (YECHH) made POSIX and IETF spec status. Streams didn't stand > > a chance. > > The stream abstraction is a nice (c)lean abstraction but it doesn't > quite work for things like multicast or datagrams in general. Plan9 > doesn't have sockets but the way it deals with UDP is not simple either. > The complexity is in the protocols themselves. Even at layer 2 (below > the IP layer) the amount of complexity is mind boggling (though I > suppose high-speed backbone switches do all this in hardware!). > I've heard good things about Streams, but never really had a problem with Sockets once I realized that send's and recv's don't necessarily have a 1-1 correspondence. I do think that Sockets need something analogous to stdio though. And I believe inetd allowed you to do that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.mcilroy at dartmouth.edu Sat Jun 18 10:35:06 2022 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Fri, 17 Jun 2022 20:35:06 -0400 Subject: [TUHS] forgotten versions Message-ID: > I can think of at least 4 things, some big, some small, where post-V7 > Research Unix was influential Besides streams, file system switch, /proc, and /dev/fd. v8 had the Blit. Though Rob's relevant patent evoked disgruntled rumblings from MIT that window systems were old hat, the Blit pioneered multiple windows as we know them today. On the contemporary Lisp Machine, for example, active computation happened in only one window at a time. 
V8 also had Peter Weinberger's Remote File System. Unlike NFS, RFS mapped UIDS, thus allowing files to be shared among computers in different jurisdictions with different UID lists. Unfortunately, RFS went the way of Reiser paging. And then there was Norman Wilson, who polished the kernel and administrative tools. All kinds of things became smaller and cleaner--an inimitable accomplishment > No clue what was new in V10 This suggests I should put on my to-do list an update of the Research Unix Reader's combined table of man-page contents, which covers only v1-v9. I think it's fair to say, though, that nothing introduced in v10 was as influential as the features mentioned above. Doug From kevin.bowling at kev009.com Sat Jun 18 15:00:19 2022 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Fri, 17 Jun 2022 22:00:19 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: On Fri, Jun 17, 2022 at 5:35 PM Douglas McIlroy < douglas.mcilroy at dartmouth.edu> wrote: > > I can think of at least 4 things, some big, some small, where post-V7 > > Research Unix was influential > > Besides streams, file system switch, /proc, and /dev/fd. v8 had the > Blit. Though Rob's relevant patent evoked disgruntled rumblings from > MIT that window systems were old hat, the Blit pioneered multiple > windows as we know them today. On the contemporary Lisp Machine, for > example, active computation happened in only one window at a time. > > V8 also had Peter Weinberger's Remote File System. Unlike NFS, RFS > mapped UIDS, thus allowing files to be shared among computers in > different jurisdictions with different UID lists. Unfortunately, RFS > went the way of Reiser paging. > I believe RFS shipped in SVR3, at least as a package for the 3b2. > And then there was Norman Wilson, who polished the kernel and > administrative tools. 
All kinds of things became smaller and > cleaner--an inimitable accomplishment > > > No clue what was new in V10 > > This suggests I should put on my to-do list an update of the Research > Unix Reader's combined table of man-page contents, which covers only > v1-v9. I think it's fair to say, though, that nothing introduced in > v10 was as influential as the features mentioned above. > > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From athornton at gmail.com Sat Jun 18 15:13:39 2022 From: athornton at gmail.com (Adam Thornton) Date: Fri, 17 Jun 2022 22:13:39 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <33F19BA1-6F43-4B0A-AC9F-D57FBB30675E@gmail.com> Could users outside Bell Labs actually get their hands on post-v7 Research Unixes? It was always my impression that The Thing You Could Get From The Phone Company, after v7, was System III or System V. Obviously it's not surprising that Research Unix features from later versions ended up in SysV, but did anyone actually learn about them from v8-v10, or just by way of SysV ? Was there some (legal) mechanism for the post-v7 Unixes to get out into people's hands? Adam From aap at papnet.eu Sat Jun 18 17:05:41 2022 From: aap at papnet.eu (Angelo Papenhoff) Date: Sat, 18 Jun 2022 09:05:41 +0200 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: To make people more aware of post-v7 Research UNIX it would be great if you could actually run all of them in a simulator and have the manuals available. V8 is working perfectly in simh and there's blit (jerq) emulation as well. DMD 5620 emulation should be possible as well with Seth Morabito's emulator, but as far as I understand it needs a different ROM that we don't have a dump of. (I've had a real 5620 connected to my laptop running v8 in simh, it worked perfectly) V9 exists as a port to Sun-3 and it can actually be booted apparently. 
The source seems incomplete, but the VAX kernel source seems to be included as well. Maybe it could be gotten to run in simh on a VAX in some form or another? V10 exists but not as anything that boots. I think getting this to work would be the holy grail but also requires quite a bit of effort. I don't know if the V8 and V10 file systems are compatible, but if that is the case one could probably start by bootstrapping from V8. It also includes the multilevel-secure IX system and software for the 630 MTG terminal. As for the manual... The V8 files have the man pages but not much of the documents. The V9 files seem to have neither. The V10 files have both the man pages and the documents but I have not yet tried to troff any of this. Since I know at least the V10 manual to be a work of art and beauty I think it should be available to everyone. I have not seen the physical V8 and V9 manuals, but if they look anything like the V10 one, they too deserve to be available to the public. Does anyone have a plan of attack? I'd gladly join some effort to make the research systems more visible or available again (but probably don't have the motivation to do so alone). Angelo/aap From clemc at ccc.com Sun Jun 19 02:58:02 2022 From: clemc at ccc.com (Clem Cole) Date: Sat, 18 Jun 2022 12:58:02 -0400 Subject: [TUHS] forgotten versions In-Reply-To: <33F19BA1-6F43-4B0A-AC9F-D57FBB30675E@gmail.com> References: <33F19BA1-6F43-4B0A-AC9F-D57FBB30675E@gmail.com> Message-ID: TUHS Source Archive BTL Research Distributions you should find them all. ᐧ On Sat, Jun 18, 2022 at 1:13 AM Adam Thornton wrote: > Could users outside Bell Labs actually get their hands on post-v7 Research > Unixes? > > It was always my impression that The Thing You Could Get From The Phone > Company, after v7, was System III or System V. Obviously it's not > surprising that Research Unix features from later versions ended up in > SysV, but did anyone actually learn about them from v8-v10, or just by way > of SysV ? 
> > Was there some (legal) mechanism for the post-v7 Unixes to get out into > people's hands? > > Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sun Jun 19 03:18:55 2022 From: imp at bsdimp.com (Warner Losh) Date: Sat, 18 Jun 2022 11:18:55 -0600 Subject: [TUHS] forgotten versions In-Reply-To: References: <33F19BA1-6F43-4B0A-AC9F-D57FBB30675E@gmail.com> Message-ID: Are these systems bootable? I see all the source, but recall previous discussions about how bootstrapping them was tricky, or at least involved a large number of steps, each of which wasn't bad, but the whole path wasn't well mapped out. For V[67] we at least have boot tapes from back in the day, and V5 has a bootable disk image... Warner On Sat, Jun 18, 2022 at 10:58 AM Clem Cole wrote: > TUHS Source Archive BTL Research Distributions > > you should find them all. > ᐧ > > On Sat, Jun 18, 2022 at 1:13 AM Adam Thornton wrote: > >> Could users outside Bell Labs actually get their hands on post-v7 >> Research Unixes? >> >> It was always my impression that The Thing You Could Get From The Phone >> Company, after v7, was System III or System V. Obviously it's not >> surprising that Research Unix features from later versions ended up in >> SysV, but did anyone actually learn about them from v8-v10, or just by way >> of SysV ? >> >> Was there some (legal) mechanism for the post-v7 Unixes to get out into >> people's hands? >> >> Adam > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Sun Jun 19 03:57:14 2022 From: clemc at ccc.com (Clem Cole) Date: Sat, 18 Jun 2022 13:57:14 -0400 Subject: [TUHS] forgotten versions In-Reply-To: References: <33F19BA1-6F43-4B0A-AC9F-D57FBB30675E@gmail.com> Message-ID: On Sat, Jun 18, 2022 at 1:19 PM Warner Losh wrote: > Are these systems bootable? 
> It has been so reported - in this same space IIRC > I see all the source, but recall previous discussions > about how bootstrapping them was tricky, or at least involved a large > number of > steps, each of which wasn't bad, but the whole path wasn't well mapped out. > I believe that is also correct. Been a ToDo item of mine to try to get them running. Supposedly people used V8 to get V9 running. > For V[67] we at least have boot tapes from back in the day, and V5 has a > bootable disk image... > Yep and it helps, some of us did it back in the day. The problem, as has been expressed, is that V{8,9,10}, like Plan9, were locked away for whatever reasons. They were all created post-Judge Green, when the behavior of AT&T formally WRT the rest of the world changed. I don't think the attitude of the people in 1127 changed (certainly not of my friends that I interacted with) but what they could do was more constrained. Under the 1956 consent decree, AT&T corporate was not allowed to be in the computer business; post-Judge Green they were actively trying to be, and their SW was formally System III and the later System V versions. BTW: AT&T was not unique in this behavior - IBM and DEC did it too. One of my favorite stories (that I personally lived) is that of the Motorola chip that became the 68000. When I first got it (at Teklabs) it did not have a number - this was the part that much, much later begat the 4404 - and we were explicitly told that it was a toy and was not committed. We managed to get approx $100 of them to make the first Magnolia machine. But the original developers had given a couple of them to the research teams of a few of their friends - the 6809 was the official product. Famously, when IBM asked Moto to bid on a processor for what was to later become the PC, they had already been playing with the future 68000 in NY and Conn. When the folks came to Austin, IBM was pressured to use the 6809 by Motorola marketing, and officially told that the other chip had no future and was an experiment.
I always looked at V{8,9,10} and Plan9 in the same way. BTW: I also think that's part of why BSD got such a lead. AT&T Marketing kept the 'consider it standard' stuff in people's faces with System III and later Sys V. Many users (like me) and our firms wanted no part of it. If AT&T had offered V{8,9,10} or Plan9 under the same basic terms that V7 had been offered under, I suspect that the story might have had a different ending. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From robpike at gmail.com Sun Jun 19 17:50:57 2022 From: robpike at gmail.com (Rob Pike) Date: Sun, 19 Jun 2022 17:50:57 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: The VAX Plan 9 kernel isn't worth anything. It never worked, was never used, and was abandoned completely when better SMP machines started appearing. The VAX code wasn't even ported, as I remember it; Ken and I started over from scratch with a pair of 4-core SGI machines with MIPS CPUs and wackadoo synchronization hardware. -rob On Sat, Jun 18, 2022 at 5:05 PM Angelo Papenhoff wrote: > To make people more aware of post-v7 Research UNIX it would be great if > you could actually run all of them in a simulator and have the manuals > available. > > V8 is working perfectly in simh and there's blit (jerq) emulation as well. > DMD 5620 emulation should be possible as well with Seth Morabito's > emulator, but as far as I understand it needs a different ROM that we > don't have a dump of. (I've had a real 5620 connected to my laptop > running v8 in simh, it worked perfectly) > > V9 exists as a port to Sun-3 and it can actually be booted apparently. > The source seems incomplete, but the VAX kernel source seems to be > included as well. Maybe it could be gotten to run in simh on a VAX > in some form or another? > > V10 exists but not as anything that boots. I think getting this to work > would be the holy grail but also requires quite a bit of effort.
> I don't know if the V8 and V10 file systems are compatible, but if that > is the case one could probably start by bootstrapping from V8. > It also includes the multilevel-secure IX system and software for the > 630 MTG terminal. > > > As for the manual... > > The V8 files have the man pages but not much of the documents. > > The V9 files seem to have neither. > > The V10 files have both the man pages and the documents but I have not > yet tried to troff any of this. > > Since I know at least the V10 manual to be a work of art and beauty I > think it should be available to everyone. I have not seen the physical > V8 and V9 manuals, but if they look anything like the V10 one, they too > deserve to be available to the public. > > > Does anyone have a plan of attack? I'd gladly join some effort to make > the research systems more visible or available again (but probably don't > have the motivation to do so alone). > > Angelo/aap > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aap at papnet.eu Sun Jun 19 18:17:00 2022 From: aap at papnet.eu (Angelo Papenhoff) Date: Sun, 19 Jun 2022 10:17:00 +0200 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: I wasn't talking about Plan 9, but it's interesting to know that there was an attempt at a VAX kernel. On 19/06/22, Rob Pike wrote: > The VAX Plan 9 kernel isn't worth anything. It never worked, was never > used, and was abandoned completely when better SMP machines started > appearing. The VAX code wasn't even ported, as I remember it; Ken and I > started over from scratch with a pair of 4-core SGI machines with MIPS CPUs > and wackadoo synchronization hardware. > > -rob From robpike at gmail.com Sun Jun 19 18:53:03 2022 From: robpike at gmail.com (Rob Pike) Date: Sun, 19 Jun 2022 18:53:03 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: Aha, yes, my mistake, sorry about that. 
I bet I misread that mail because of the mention of a Sun port, which put me in a nostalgic (read: resentful) mood. Anyway, knowing about the Plan 9 VAX kernel might be interesting, but the kernel itself was not. -rob On Sun, Jun 19, 2022 at 6:17 PM Angelo Papenhoff wrote: > I wasn't talking about Plan 9, but it's interesting to know that there > was an attempt at a VAX kernel. > > On 19/06/22, Rob Pike wrote: > > The VAX Plan 9 kernel isn't worth anything. It never worked, was never > > used, and was abandoned completely when better SMP machines started > > appearing. The VAX code wasn't even ported, as I remember it; Ken and I > > started over from scratch with a pair of 4-core SGI machines with MIPS > CPUs > > and wackadoo synchronization hardware. > > > > -rob > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aap at papnet.eu Sun Jun 19 19:02:23 2022 From: aap at papnet.eu (Angelo Papenhoff) Date: Sun, 19 Jun 2022 11:02:23 +0200 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: It brings up one other question for me though: rsc made a repo of the plan 9 kernel source code (https://9p.io/sources/extra/9hist/) (which can now be found in git format too: https://github.com/0intro/9hist) But what about the non-kernel parts of the system? Is there a chance of getting the user space stuff as well? Does it even still exist? aap On 19/06/22, Rob Pike wrote: > Aha, yes, my mistake, sorry about that. I bet I misread that mail because > of the mention of a Sun port, which put me in a nostalgic (read: resentful) > mood. > > Anyway, knowing about the Plan 9 VAX kernel might be interesting, but the > kernel itself was not. > > -rob > > > On Sun, Jun 19, 2022 at 6:17 PM Angelo Papenhoff wrote: > > > I wasn't talking about Plan 9, but it's interesting to know that there > > was an attempt at a VAX kernel. > > > > On 19/06/22, Rob Pike wrote: > > > The VAX Plan 9 kernel isn't worth anything. 
It never worked, was never > > > used, and was abandoned completely when better SMP machines started > > > appearing. The VAX code wasn't even ported, as I remember it; Ken and I > > > started over from scratch with a pair of 4-core SGI machines with MIPS > > CPUs > > > and wackadoo synchronization hardware. > > > > > > -rob > > From arnold at skeeve.com Sun Jun 19 19:14:26 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Sun, 19 Jun 2022 03:14:26 -0600 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <202206190914.25J9EQ3L022848@freefriends.org> See https://plan9foundation.org/ and https://p9f.org/dl/ in particular to download the sources. Arnold Angelo Papenhoff wrote: > It brings up one other question for me though: > rsc made a repo of the plan 9 kernel source code (https://9p.io/sources/extra/9hist/) > (which can now be found in git format too: https://github.com/0intro/9hist) > But what about the non-kernel parts of the system? > Is there a chance of getting the user space stuff as well? Does it even > still exist? > > aap > > On 19/06/22, Rob Pike wrote: > > Aha, yes, my mistake, sorry about that. I bet I misread that mail because > > of the mention of a Sun port, which put me in a nostalgic (read: resentful) > > mood. > > > > Anyway, knowing about the Plan 9 VAX kernel might be interesting, but the > > kernel itself was not. > > > > -rob > > > > > > On Sun, Jun 19, 2022 at 6:17 PM Angelo Papenhoff wrote: > > > > > I wasn't talking about Plan 9, but it's interesting to know that there > > > was an attempt at a VAX kernel. > > > > > > On 19/06/22, Rob Pike wrote: > > > > The VAX Plan 9 kernel isn't worth anything. It never worked, was never > > > > used, and was abandoned completely when better SMP machines started > > > > appearing. 
The VAX code wasn't even ported, as I remember it; Ken and I > > > > started over from scratch with a pair of 4-core SGI machines with MIPS > > > CPUs > > > > and wackadoo synchronization hardware. > > > > > > > > -rob > > > From aap at papnet.eu Sun Jun 19 19:19:35 2022 From: aap at papnet.eu (Angelo Papenhoff) Date: Sun, 19 Jun 2022 11:19:35 +0200 Subject: [TUHS] forgotten versions In-Reply-To: <202206190914.25J9EQ3L022848@freefriends.org> References: <202206190914.25J9EQ3L022848@freefriends.org> Message-ID: That's only the releases. I meant the earlier pre-1e code. aap On 19/06/22, arnold at skeeve.com wrote: > See https://plan9foundation.org/ and https://p9f.org/dl/ in particular > to download the sources. > > Arnold From arnold at skeeve.com Sun Jun 19 19:23:50 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Sun, 19 Jun 2022 03:23:50 -0600 Subject: [TUHS] forgotten versions In-Reply-To: References: <202206190914.25J9EQ3L022848@freefriends.org> Message-ID: <202206190923.25J9No8c024167@freefriends.org> Maybe it exists on the WORM drive at Bell Labs... :-( Angelo Papenhoff wrote: > That's only the releases. I meant the earlier pre-1e code. > > aap > > On 19/06/22, arnold at skeeve.com wrote: > > See https://plan9foundation.org/ and https://p9f.org/dl/ in particular > > to download the sources. > > > > Arnold From m at mbsks.franken.de Sun Jun 19 21:37:02 2022 From: m at mbsks.franken.de (Matthias Bruestle) Date: Sun, 19 Jun 2022 13:37:02 +0200 Subject: [TUHS] forgotten versions In-Reply-To: <202206190923.25J9No8c024167@freefriends.org> References: <202206190914.25J9EQ3L022848@freefriends.org> <202206190923.25J9No8c024167@freefriends.org> Message-ID: More like a WORN drive when nobody is looking at it. On Sun, Jun 19, 2022 at 03:23:50AM -0600, arnold at skeeve.com wrote: > Maybe it exists on the WORM drive at Bell Labs... :-( > > Angelo Papenhoff wrote: > > > That's only the releases. I meant the earlier pre-1e code. 
-- When You Find Out Your Normal Daily Lifestyle Is Called Quarantine From kennethgoodwin56 at gmail.com Mon Jun 20 00:47:23 2022 From: kennethgoodwin56 at gmail.com (Kenneth Goodwin) Date: Sun, 19 Jun 2022 10:47:23 -0400 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: Just chiming in a bit.. Rob, it might be interesting to old geezers like me as well as newbies entering the field to get a perspective on Plan 9 and its evolution. The motivations behind it. What your group was trying to accomplish, the approach, pitfalls and the entire decision-making process as things went along. Even things that went horribly wrong and what happened etc. Sorry for my potential ignorance here, but other than the documents that come with the source code distribution, there does not seem to be any official textbook-style document available with that level of detail going into the evolution and back story. I am thinking more in terms of the Multics book that came out after that project failed. Perhaps even some college-level tutorial videos on YouTube that do a deep dive. Leaving your group's collective wisdom and insight for posterity. Anatomy of a Research Operating System. Approaching the Design of a modern-day distributed Operating System, practices and pitfalls. (You can stop laughing 😃 now....) On Sun, Jun 19, 2022, 4:53 AM Rob Pike wrote: > Aha, yes, my mistake, sorry about that. I bet I misread that mail because > of the mention of a Sun port, which put me in a nostalgic (read: resentful) > mood. > > Anyway, knowing about the Plan 9 VAX kernel might be interesting, but the > kernel itself was not. > > -rob > > > On Sun, Jun 19, 2022 at 6:17 PM Angelo Papenhoff wrote: > >> I wasn't talking about Plan 9, but it's interesting to know that there >> was an attempt at a VAX kernel. >> >> On 19/06/22, Rob Pike wrote: >> > The VAX Plan 9 kernel isn't worth anything.
It never worked, was never >> > used, and was abandoned completely when better SMP machines started >> > appearing. The VAX code wasn't even ported, as I remember it; Ken and I >> > started over from scratch with a pair of 4-core SGI machines with MIPS >> CPUs >> > and wackadoo synchronization hardware. >> > >> > -rob >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at quintile.net Mon Jun 20 02:01:22 2022 From: steve at quintile.net (Steve Simon) Date: Sun, 19 Jun 2022 17:01:22 +0100 Subject: [TUHS] early plan9 Message-ID: <3535AFC5-CD9C-4FAA-8AFC-CDB1437615D7@quintile.net> interesting to know the vax was a complete dead end. i do remember jmk (rip) reporting on 9fans, maybe even releasing, the vax plan9 kenc compiler he discovered in a dusty corner of the dump filesystem. I was intrigued and asked if there was anything else, but he said he said there where no kernel or driver fragments to go with it. -Steve From aek at bitsavers.org Mon Jun 20 02:27:58 2022 From: aek at bitsavers.org (Al Kossow) Date: Sun, 19 Jun 2022 09:27:58 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <6faab698-5493-55a9-7a5b-41acd3f4d995@bitsavers.org> On 6/19/22 7:47 AM, Kenneth Goodwin wrote: > Perhaps even some college level tutorial videos on YouTube Put it in writing Eschew ewe toob From tytso at mit.edu Mon Jun 20 04:32:45 2022 From: tytso at mit.edu (Theodore Ts'o) Date: Sun, 19 Jun 2022 14:32:45 -0400 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: On Sun, Jun 19, 2022 at 10:47:23AM -0400, Kenneth Goodwin wrote: > Just chiming in a bit.. > > Rob, it might be interesting to old geezers like me as well as newbies > entering the field to get a perspective on Plan 9 and its evolution. The > motivations behind it. What your group was trying to accomplish, the > approach, pitfalls and the entire decision making process as things went > along. 
Even things that went horribly wrong and what happened etc. I'll second that. I think it would be really helpful. There was a time when I was reviewing a paper which made a bunch of claims about what Plan 9 was trying to accomplish and in particular about what the ultimate design goals were for a particular component of Plan 9. (I won't go into further details since as far as I know, that paper was never published.) In any case, since I wasn't familiar with the history of Plan 9 to evaluate these claims, with the permission of the PC chairs, I found someone who had been part of the Plan 9 team, and asked them to review certain passages for accuracy, and they said, "Uh, no.... that's totally not the case. They're completely wrong." So if someone were willing to create additional write ups about lessons learned, or if that's too much work, maybe someone could do some interview for a podcast or a vlog, that would be really excellent. - Ted From crossd at gmail.com Mon Jun 20 04:38:26 2022 From: crossd at gmail.com (Dan Cross) Date: Sun, 19 Jun 2022 14:38:26 -0400 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: On Sun, Jun 19, 2022 at 2:33 PM Theodore Ts'o wrote: > On Sun, Jun 19, 2022 at 10:47:23AM -0400, Kenneth Goodwin wrote: > > Just chiming in a bit.. > > > > Rob, it might be interesting to old geezers like me as well as newbies > > entering the field to get a perspective on Plan 9 and its evolution. The > > motivations behind it. What your group was trying to accomplish, the > > approach, pitfalls and the entire decision making process as things went > > along. Even things that went horribly wrong and what happened etc. > > I'll second that. I think it would be really helpful. > > There was a time when I was reviewing a paper which made a bunch of > claims about what Plan 9 was trying to accomplish and in particular > about what the ultimate design goals were for a particular component of > Plan 9.
(I won't go into further details since as far as I know, that > paper was never published.) > > In any case, since I wasn't familiar with the history of Plan 9 to > evaluate these claims, with the permission of the PC chairs, I found > someone who had been part of the Plan 9 team, and asked them to review > certain passages for accuracy, and they said, "Uh, no.... that's > totally not the case. They're completely wrong." > > So if someone were willing to create additional write ups about > lessons learned, or if that's too much work, maybe someone could do > some interview for a podcast or a vlog, that would be really > excellent. Agreed. A retrospective would be a very welcome addition to the canon. - Dan C. (PS: I _had_ heard of the VAX effort before, but I don't think I'd known quite how nascent it was before it was abandoned in favor of MIPS and 68k.) From dfawcus+lists-tuhs at employees.org Mon Jun 20 06:46:31 2022 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Sun, 19 Jun 2022 21:46:31 +0100 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: References: Message-ID: On Fri, Jun 17, 2022 at 10:00:19PM -0700, Kevin Bowling wrote: > On Fri, Jun 17, 2022 at 5:35 PM Douglas McIlroy douglas.mcilroy at dartmouth.edu> wrote: > > > > V8 also had Peter Weinberger's Remote File System. Unlike NFS, RFS > > mapped UIDS, thus allowing files to be shared among computers in > > different jurisdictions with different UID lists. Unfortunately, RFS > > went the way of Reiser paging. > > I believe RFS shipped in SVR3, at least as a package for the 3b2. Apparently. I've a book (ISBN 0-672-48440-4) with a short chapter on it within, authored by Douglas Harris. It happens to state: AT&T's approach towards UNIX System V, Release 3.0 and beyond is to provide a /Remote File System/ (RFS) that is an extension of the ordinary file system arrangement. 
[…] […] Remote File System Release 1.0 was first introduced in 1986 with Release 3.0 of UNIX System V for AT&T 3B2 machines with Starlan network connections. It makes heavy use of STREAMS, which were also introduced at that time. The next release, RFS 1.1, accompanying System V Release 3.1, was greatly enhanced. At this time releases for other machines became available. In particular, with the release of a standard UNIX for Intel 80386-based machines that incorporated STREAMS, vendors of networking products could arrange for RFS to operate with those products, and RFS could run over Ethernet or any other network that could support a solid transport connection such as TCP/IP or NetBIOS. […] So that may be somewhere to search, possibly someone can find a '386 image with it included? DF From lm at mcvoy.com Mon Jun 20 09:07:20 2022 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 19 Jun 2022 16:07:20 -0700 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: References: Message-ID: <20220619230720.GJ26016@mcvoy.com> On Sun, Jun 19, 2022 at 09:46:31PM +0100, Derek Fawcus wrote: > On Fri, Jun 17, 2022 at 10:00:19PM -0700, Kevin Bowling wrote: > > On Fri, Jun 17, 2022 at 5:35 PM Douglas McIlroy douglas.mcilroy at dartmouth.edu> wrote: > > > > > > V8 also had Peter Weinberger's Remote File System. Unlike NFS, RFS > > > mapped UIDS, thus allowing files to be shared among computers in > > > different jurisdictions with different UID lists. Unfortunately, RFS > > > went the way of Reiser paging. > > > > I believe RFS shipped in SVR3, at least as a package for the 3b2. > > Apparently. I've a book (ISBN 0-672-48440-4) with a short chapter on it within, authored by Douglas Harris. > > It happens to state: > > AT&T's approach towards UNIX System V, Release 3.0 and beyond is to provide a /Remote File System/ (RFS) that is an extension of the ordinary file system arrangement. […]
Thankless work since Sun ran their entire campus on NFS; RFS never got any attention. It's too bad because it did solve some problems that NFS just punted on. NFS is Clem's law in action, it was good enough, not great, but still won. From brad at anduin.eldar.org Mon Jun 20 09:19:31 2022 From: brad at anduin.eldar.org (Brad Spencer) Date: Sun, 19 Jun 2022 19:19:31 -0400 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: <20220619230720.GJ26016@mcvoy.com> (message from Larry McVoy on Sun, 19 Jun 2022 16:07:20 -0700) Message-ID: Larry McVoy writes: > On Sun, Jun 19, 2022 at 09:46:31PM +0100, Derek Fawcus wrote: >> On Fri, Jun 17, 2022 at 10:00:19PM -0700, Kevin Bowling wrote: >> > On Fri, Jun 17, 2022 at 5:35 PM Douglas McIlroy douglas.mcilroy at dartmouth.edu> wrote: >> > > >> > > V8 also had Peter Weinberger's Remote File System. Unlike NFS, RFS >> > > mapped UIDS, thus allowing files to be shared among computers in >> > > different jurisdictions with different UID lists. Unfortunately, RFS >> > > went the way of Reiser paging. >> > >> > I believe RFS shipped in SVR3, at least as a package for the 3b2. >> >> Apparently. I've a book (ISBN 0-672-48440-4) with a short chapter on it within, authored by Douglas Harris. >> >> It happens to state: >> >> AT&T's approach towards UNIX System V, Release 3.0 and beyond is to provide a /Remote File System/ (RFS) that is an extension of the ordinary file system arrangement. [???] > > SunOS 4.x shipped RFS, Howard Chartok (my office mate at the time) did > the port I believe. Thankless work since Sun ran their entire campus > on NFS; RFS never got any attention. It's too bad because it did solve > some problems that NFS just punted on. NFS is Clem's law in action, > it was good enough, not great, but still won. I remember SunOS 4.x having RFS.. 
I never used it but I vaguely recall (probably misremembering) that there was a warning in the man page about it that it might not interoperate with /dev devices correctly if the byte order of the machines was different. I seem to recall that with RFS if /dev was remoted you actually accessed the remote devices and not just the device nodes from the system that /dev was mounted to. At the AT&T site I was at we used NFS exclusively too. -- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From norman at oclsc.org Mon Jun 20 10:44:14 2022 From: norman at oclsc.org (Norman Wilson) Date: Sun, 19 Jun 2022 20:44:14 -0400 (EDT) Subject: [TUHS] RFS (was Re: Re: forgotten versions) Message-ID: I don't know the exact history of RFS a la System V, but I don't think it was Peter Weinberger's stuff, and it certainly wasn't his code. Nor his name: he called his first version neta and his second netb (he knew it would be changing and allowed for it in the name from the start). I don't remember us ever calling it RFS, or even remote file systems, inside 1127; we called it network file systems (never NFS because the Sun stuff existed by then). For those who don't know it, Peter's goal was quite different from that of NFS. The idea behind NFS seems always to have been to mount a remote file system as if it were local, with a base assumption early on that everything was within the same administrative domain so it was OK to make assumptions about userids matching up, and running code as super-user. Peter described his intent as `I want to be able to use your disks, and that's a lot simpler if I don't have to get you to add code to your kernel, or even to run a program as super-user.' Hence the entirely-user-mode server program, which could use super-user privileges to afford access as any user if it had them, but also worked fine when run as an ordinary user with only that user's file permissions.
We did in fact normally run it as super-user so each of our 15 or so VAXes could see the file system tree on each other, but we also occasionally did it otherwise. That was one reason device files worked as they did, accessing the device on the server end rather than acting like a local special file on the client: we didn't care about running diskless clients, but we did occasionally care about accessing a remote system's tape drive. Peter, being a self-described fan of cheap hacks, also wasn't inclined to spend much time thinking about general abstractions; in effect he just turned various existing kernel subroutines (when applied to a network file system) into RPCs. The structure of the file system switch was rather UNIX-specific, reflecting that. That also means Peter's code was a bit ad-hoc and wonky in places. He cleaned it up considerably between neta and netb, and I did further cleanup later. I even had a go at a library to isolate the network protocol from the server proper, converted the netb server to use it, and made a few demo servers of my own like one to read and write raw FILES-11 file systems--useful for working with the console file system on the VAX 8800 series, which was exported to the host as a block device--and a daemon to allow a tar archive to be mounted as a read-only file system. In modern systems, you can do the same sort of things with FUSE, and set up the same I-want-to-use-your-disks (or I want to get at my own files from afar without privileges) scheme with sshfs. I would be very surprised to learn that either of those borrowed from their ancient cousins in Research UNIX; so far as I know they're independent inventions. Either way I'm glad they exist. 
Norman Wilson Toronto ON From ggm at algebras.org Mon Jun 20 11:02:25 2022 From: ggm at algebras.org (George Michaelson) Date: Mon, 20 Jun 2022 11:02:25 +1000 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: References: Message-ID: I probably tend too much to the sociological more than technical in this, but I do know we were very prejudiced against RFS. Very. We just didn't like it. Probably? This was some kind of bizarre chauvinism about NFS and Suns, and access to the systems. But, my memory is that RFS as available to us was not very easy to deploy, and may have been a "hard" mount by default, just at the point when we were finding, in experience, the benefit of having a "soft" mount. Sure, a soft mount is significantly less reliable, but with unreliable campus networks and systems, half a usable system is better than none at all, and RFS tended to be 'all or nothing'. This was during the window where Sun was breaking the UDP checksum, and also Ethernet backoff. So, its UDP checksum gaming meant it was "faster" because it did less work, and its Ethernet backoff gaming meant it was always first-to-fire in a contention event on the cable. Two cheats! I don't think RFS did either, so there was also specious benchmarking coming into effect. -G From arnold at skeeve.com Mon Jun 20 14:50:01 2022 From: arnold at skeeve.com (arnold at skeeve.com) Date: Sun, 19 Jun 2022 22:50:01 -0600 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: References: Message-ID: <202206200450.25K4o1Vv023015@freefriends.org> RFS was not Peter's code. It was stateful, which had advantages and disadvantages. And, as someone else mentioned, if two systems were binary compatible, remote device access worked, including ioctls. IIRC Steve Rago ported Peter's code to System V Release 4, and published a paper about it, but I don't think the code ever escaped AT&T. System V Release 4 sorta overdid the special file system thing; as I recall /dev/fd was done as a file system!
FWIW, Arnold norman at oclsc.org (Norman Wilson) wrote: > I don't know the exact history of RFS a la System V, but I > don't think it was Peter Weinberger's stuff, and it certainly > wasn't his code. Nor his name: he called his first version > neta and his second netb (he knew it would be changing and > allowed for it in the name from the start). > > I don't remember us ever calling it RFS, or even remote > file systems, inside 1127; we called it network file systems > (never NFS because the Sun stuff existed by then). > > For those who don't know it, Peter's goal was quite different > from that of NFS. The idea behind NFS seems always to have > been to mount a remote file system as if it were local, with > a base assumption early on that everything was within the > same administrative domain so it was OK to make assumptions > about userids matching up, and running code as super-user. > Peter described his intent as `I want to be able to use your > disks, and that's a lot simpler if I don't have to get you > to add code to your kernel, or even to run a program as > super-user.' Hence the entirely-user-mode server program, > which could use super-user privileges to afford access as > any user if it had them, but also worked fine when run as > an ordinary user with only that user's file permissions. > We did in fact normally run it as super-user so each of > our 15 or so VAXes could see the file system tree on each > other, but we also occasionally did it otherwise. > > That was one reason device files worked as they did, accessing > the device on the server end rather than acting like a local > special file on the client: we didn't care about running > diskless clients, but we did occasionally care about accessing > a remote system's tape drive. 
> > Peter, being a self-described fan of cheap hacks, also wasn't > inclined to spend much time thinking about general abstractions; > in effect he just turned various existing kernel subroutines > (when applied to a network file system) into RPCs. The > structure of the file system switch was rather UNIX-specific, > reflecting that. > > That also means Peter's code was a bit ad-hoc and wonky in > places. He cleaned it up considerably between neta and netb, > and I did further cleanup later. I even had a go at a library > to isolate the network protocol from the server proper, converted > the netb server to use it, and made a few demo servers of my own > like one to read and write raw FILES-11 file systems--useful for > working with the console file system on the VAX 8800 series, > which was exported to the host as a block device--and a daemon > to allow a tar archive to be mounted as a read-only file system. > > In modern systems, you can do the same sort of things with FUSE, > and set up the same I-want-to-use-your-disks (or I want to get > at my own files from afar without privileges) scheme with sshfs. > I would be very surprised to learn that either of those borrowed > from their ancient cousins in Research UNIX; so far as I know > they're independent inventions. Either way I'm glad they exist. > > Norman Wilson > Toronto ON From tuhs at tuhs.org Mon Jun 20 15:03:46 2022 From: tuhs at tuhs.org (Arno Griffioen via TUHS) Date: Mon, 20 Jun 2022 07:03:46 +0200 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: References: <20220619230720.GJ26016@mcvoy.com> Message-ID: On Sun, Jun 19, 2022 at 07:19:31PM -0400, Brad Spencer wrote: > order of the machines was different. I seem to recall that with RFS if > /dev was remoted you actually accessed the remote devices and not just > the device nodes from the system that /dev was mounted to. At the AT&T > site I was at we used NFS exclusively too. Yup.. 
I used RFS on various SVR3 and SVR4 platforms back in the day, usually for this purpose. E.g. to provide a simple way of giving 'workstation' users access to modem-banks attached to central servers. It worked fine as long as the platforms were pretty similar (e.g. all i386-based), but could indeed get 'interesting' once you added bits in the mix that were based on other CPUs. For me RFS came along 'before its time' as by design it could not handle things like creating diskless or dataless workstations easily, exactly because of the more fine-grained, file-oriented setup, and that's where NFS did its thing. The features RFS brought did, unfortunately, not seem as useful at the time for general applications: broadly sharing boot and/or home/staff environments was 'the thing' needed for a long time, and NFS did that very (too ;) ) easily. However.. I do see it more like the UNIX 'grandad' for things we now have like SMB and cloud sync/share 'filesystem' tools, which operate much more on a style of access and granularity like RFS did. I always wondered if the Microsoft engineers that worked on the initial SMB implementations looked at RFS for ideas. Bye, Arno. From tytso at mit.edu Mon Jun 20 16:53:22 2022 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 20 Jun 2022 02:53:22 -0400 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: References: <20220619230720.GJ26016@mcvoy.com> Message-ID: I'll note there was another RFS that was posted to net.sources and net.unix-wizards by Todd Brunhoff in January 1986. This was completely different from the AT&T System V's; Todd's RFS was done as part of his master's degree at the University of Denver, and it was heavily dependent on BSD 4.2/4.3's sockets interface.
For more information, see: https://groups.google.com/g/net.unix-wizards/c/QwRVsZS9jEM/m/V4ZI64CKopsJ?pli=1 We used this version of RFS at MIT Project Athena for a while before switching to AFS, and it's mentioned in Professor Saltzer's Athena Technical Plan, in the section entitled, "The Athena File Storage Model": https://web.mit.edu/saltzer/www/publications/athenaplan/c.6.pdf Project Athena integrated MIT Kerberos (Version 4) into both NFS and RFS, and of course AFS used Kerberos for its authentication tokens. - Ted From tuhs at tuhs.org Mon Jun 20 22:28:06 2022 From: tuhs at tuhs.org (Paul Ruizendaal via TUHS) Date: Mon, 20 Jun 2022 14:28:06 +0200 Subject: [TUHS] RFS (was Re: Re: forgotten versions) Message-ID: <94902095-8F4D-4350-8B2E-480E752515ED@planet.nl> > I don't know the exact history of RFS a la System V, but I > don't think it was Peter Weinberger's stuff, and it certainly > wasn't his code. Peter’s code is available in the V8 and V9 trees on TUHS. The Sys V repositories on Github appear to include RFS code in all of R3.0, R3.1 and R3.2. At first glance, it seems quite different from the V8/V9 code. > Peter, being a self-described fan of cheap hacks, also wasn't > inclined to spend much time thinking about general abstractions; > in effect he just turned various existing kernel subroutines > (when applied to a network file system) into RPCs. The > structure of the file system switch was rather UNIX-specific, > reflecting that. Yes, well put. I’ve back ported his filesystem switch to V6/V7 and it is very light touch: on the PDP11 it added only some 500 bytes of kernel code (after some refactoring). With hindsight it seems such a logical idea, certainly in a context where the labs were experimenting with remote system calls in the mid 70’s (Heinz Lycklama's work on satellite Unix) and early 80’s (Gottfried Luderer et al. on distributed Unix — another forgotten version). It is such a powerful abstraction, but apparently very elusive to invent. 
Paul From dfawcus+lists-tuhs at employees.org Tue Jun 21 07:53:06 2022 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Mon, 20 Jun 2022 22:53:06 +0100 Subject: [TUHS] RFS (was Re: Re: forgotten versions) In-Reply-To: <94902095-8F4D-4350-8B2E-480E752515ED@planet.nl> References: <94902095-8F4D-4350-8B2E-480E752515ED@planet.nl> Message-ID: On Mon, Jun 20, 2022 at 02:28:06PM +0200, Paul Ruizendaal via TUHS wrote: > > > Peter, being a self-described fan of cheap hacks, also wasn't > > inclined to spend much time thinking about general abstractions; > > in effect he just turned various existing kernel subroutines > > (when applied to a network file system) into RPCs. The > > structure of the file system switch was rather UNIX-specific, > > reflecting that. > > Yes, well put. I’ve back ported his filesystem switch to V6/V7 and it is very light touch: on the PDP11 it added only some 500 bytes of kernel code (after some refactoring). > > With hindsight it seems such a logical idea, certainly in a context where the labs were experimenting with remote system calls in the mid 70’s (Heinz Lycklama's work on satellite Unix) and early 80’s (Gottfried Luderer et al. on distributed Unix — another forgotten version). It is such a powerful abstraction, but apparently very elusive to invent. Interesting, given the earlier mention of SMB. As I recall, the MS-DOS Redirector interface is sort of at a similar level, but probably a lot more messy in terms of how the internal 'interfaces' are exposed. That was in DOS 3.0, which according to Wikipedia was released in April '85, with 8th edition being around Feb '85, I guess they may have been done in parallel? 
DF From athornton at gmail.com Tue Jun 21 12:56:04 2022 From: athornton at gmail.com (Adam Thornton) Date: Mon, 20 Jun 2022 19:56:04 -0700 Subject: [TUHS] Tom Lyon's 3270 driver for UTS Message-ID: <49B9445A-748C-40A8-A28E-F531FD95F741@gmail.com> While I know that there are people here who like good old ed...I've been playing with UTS under VM/370. This version is from 1981 and I think it's v7. But the important thing is that Tom Lyon wrote a 3270 terminal driver, and it comes with ned, which is a screen editor that feels a lot like XEDIT--which wasn't even in CMS at that point, although EE has been added to the VM370 Community Edition I'm using. And the man pages are fullscreen as well. UTS is very, very usable because of that. This really is a wonderful terminal driver. So, thank you, Tom! Adam From lm at mcvoy.com Tue Jun 21 13:41:26 2022 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 20 Jun 2022 20:41:26 -0700 Subject: [TUHS] Tom Lyon's 3270 driver for UTS In-Reply-To: <49B9445A-748C-40A8-A28E-F531FD95F741@gmail.com> References: <49B9445A-748C-40A8-A28E-F531FD95F741@gmail.com> Message-ID: <20220621034126.GE26016@mcvoy.com> I overlapped with Pugs at Sun, knew of him, heard he was smart, but didn't really get it. Then I saw the way he exposed the iommu to user space (the bang for the buck was for VMs but researchers that were doing user space networking also used it), that hit my radar screen as really good work. He's had a long career of doing useful stuff, pretty much everything he's touched has gotten better. On Mon, Jun 20, 2022 at 07:56:04PM -0700, Adam Thornton wrote: > While I know that there are people here who like good old ed...I've been playing with UTS under VM/370. This version is from 1981 and I think it's v7. 
But the important thing is that Tom Lyon wrote a 3270 terminal driver, and it comes with ned, which is a screen editor that feels a lot like XEDIT--which wasn't even in CMS at that point, although EE has been added to the VM370 Community Edition I'm using. And the man pages are fullscreen as well. > > UTS is very, very usable because of that. This really is a wonderful terminal driver. > > So, thank you, Tom! > > Adam -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From tuhs at tuhs.org Wed Jun 22 00:18:33 2022 From: tuhs at tuhs.org (Tom Lyon via TUHS) Date: Tue, 21 Jun 2022 07:18:33 -0700 Subject: [TUHS] Tom Lyon's 3270 driver for UTS In-Reply-To: <49B9445A-748C-40A8-A28E-F531FD95F741@gmail.com> References: <49B9445A-748C-40A8-A28E-F531FD95F741@gmail.com> Message-ID: Thanks, Adam. I've always been proud of that driver. Further credit - 'ned' was written by Dan Walsh (not the Redhat one), and, like vi, it has 'ed' buried in it so all the familiar stuff just works. It's possible to use UTS with ASCII terminals, but the hardware only supports half-duplex, buffered, mode! I suspect it'd be possible to hack the 270x emulator in Hercules to make it appear really full-duplex - then one could add the real 'vi', etc. But that would not be faithful emulation. On Mon, Jun 20, 2022 at 7:56 PM Adam Thornton wrote: > While I know that there are people here who like good old ed...I've been > playing with UTS under VM/370. This version is from 1981 and I think it's > v7. But the important thing is that Tom Lyon wrote a 3270 terminal driver, > and it comes with ned, which is a screen editor that feels a lot like > XEDIT--which wasn't even in CMS at that point, although EE has been added > to the VM370 Community Edition I'm using. And the man pages are fullscreen > as well. > > UTS is very, very usable because of that. This really is a wonderful > terminal driver. > > So, thank you, Tom! 
> > Adam -- - Tom From pnr at planet.nl Sun Jun 19 05:55:11 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Sat, 18 Jun 2022 21:55:11 +0200 Subject: [TUHS] forgotten versions Message-ID: <83A7C50A-8717-460F-A632-C7A5D8FA237B@planet.nl> For those interested in a quick feel for V8 and early SysV, I recommend the excellent unix50 stuff:

SSH to unix50: "ssh unix50 at unix50.org"
Password is "unix50"

You end up in a menu with:

SDF Public Access UNIX System presents ...
/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/
/~/~ H Y S T E R I C A L ~ U N I X ~ S Y S T E M S ~/~/
/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/~/
[a] UNICS (Version Zero) PDP-7 Summer 1969
[b] First Edition UNIX PDP-11/20 November 1971
[c] Fifth Edition UNIX PDP-11/40 June 1974
[d] Sixth Edition UNIX PDP-11/45 May 1975
[e] Seventh Edition UNIX PDP-11/70 January 1979
[f] Research UNIX 8 VAX-11/750 1984
[g] AT&T UNIX System III PDP-11/70 Fall 1982
[h] AT&T UNIX System V PDP-11/70 1983
[i] AT&T UNIX System V 3b2/400 1984
[j] 4.3 BSD MicroVAX June 1986
[k] 2.11 BSD PDP-11/70 January 1992
[w] What's running now?
[q] QUIT (and run away in fear!)

User contributed tutorials are at https://sdf.org/?tutorials/unix50th Want persistent images? networking? more ttys? Join https://sdf.org To exit from a run, press Ctrl-E to return to the simulator, type 'exit', then type 'q'. I just tried V8 and it still works, although the boot log suggests that an image reset may be in order. Many, many thanks to whoever is maintaining this! From moody at posixcafe.org Wed Jun 22 09:56:02 2022 From: moody at posixcafe.org (Jacob Moody) Date: Tue, 21 Jun 2022 17:56:02 -0600 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: On 6/19/22 12:38, Dan Cross wrote: > On Sun, Jun 19, 2022 at 2:33 PM Theodore Ts'o wrote: >> On Sun, Jun 19, 2022 at 10:47:23AM -0400, Kenneth Goodwin wrote: >>> Just chiming in a bit..
>>> >>> Rob, it might be interesting to old geezers like me as well as newbies >>> entering the field to get a perspective on Plan 9 and its evolution. The >>> motivations behind it. What your group was trying to accomplish, the >>> approach, pitfalls and the entire decision making process as things went >>> along. Even things that went horribly wrong and what happened etc. >> >> I'll second that. I think it would be really helpful. >> >> There was a time when I was reviewing a paper which made a bunch of >> claims about what Plan 9 was trying to accomplish and in particular >> about what the ultimate design goals for a particular component of >> Plan 9. (I won't go into further details since as far as I know, that >> paper was never published.) >> >> In any case, since I wasn't familiar with the history of Plan 9 to >> evaluate these claims, with the permission of the PC chairs, I found >> someone who had been part of the Plan 9 team, and asked them to review >> certain passages for accuracy, and they said, "Uh, no.... that's >> totally not the case. They're completely wrong." >> >> So if someone were willing to create additional write ups about >> lessons learned, or if that's too much work, maybe someone could do >> some interview for a podcast or a vlog, that would be really >> excellent. > > Agreed. A retrospective would be a very welcome addition to the canon. > > - Dan C. > > (PS: I _had_ heard of the VAX effort before, but I don't think I'd known > quite how nascent it was before it was abandoned in favor of MIPS and > 68k.) Adding my vote in for getting some sort of retrospective. I recently stumbled across the existence of datakit when going through the plan9foundation source archives. Would be curious to hear more about its involvement with plan9. 
From lm at mcvoy.com Wed Jun 22 10:13:11 2022 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 21 Jun 2022 17:13:11 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: Message-ID: <20220622001311.GU26016@mcvoy.com> On Tue, Jun 21, 2022 at 05:56:02PM -0600, Jacob Moody wrote: > I recently stumbled across the existence of datakit > when going through the plan9foundation source archives. > Would be curious to hear more about its involvement > with plan9. Pretty sure datakit predated Plan 9, didn't Greg Chesson work on that? He was my mentor at SGI, my memory is datakit was sort of early on in his career and then he did XTP, which nobody knows about but I believe is still used by the military. Unless the early Bell Labs datakit and the Plan 9 datakit are different things. From robpike at gmail.com Wed Jun 22 10:48:24 2022 From: robpike at gmail.com (Rob Pike) Date: Wed, 22 Jun 2022 10:48:24 +1000 Subject: [TUHS] forgotten versions In-Reply-To: <20220622001311.GU26016@mcvoy.com> References: <20220622001311.GU26016@mcvoy.com> Message-ID: Plan 9 used Datakit as its network for quite a while. The Gnot terminals had an INCON interface, a megabit (approximately) twisted pair adjunct to Datakit. I had an INCON link running over a T-1 link to my house - great excitement back in the day. (The kernel downloaded over the line and booted the machine up to the window system - there was no local disk - from power up, in 7 seconds.) NJ Bell needed to install a new nitrogen-pressurized 26-pair cable, supported by a new telephone pole, to set it up, because I had already used up all available pairs on the existing line to my house. All included at no extra cost. (You pay for the service, not its construction.) When the internet became unavoidable, we used Plan 9's import mechanism to import the single external TCP/IP interface from our gateway machine, over Datakit, to the Gnots. 
We did the same, but importing now over IL (an ethernet protocol built by Phil Winterbottom) when our terminals became PCs. That's how I remember it, at least, but I might have got some details wrong. I think much of this is covered in http://doc.cat-v.org/plan_9/4th_edition/papers/net/ -rob On Wed, Jun 22, 2022 at 10:13 AM Larry McVoy wrote: > On Tue, Jun 21, 2022 at 05:56:02PM -0600, Jacob Moody wrote: > > I recently stumbled across the existence of datakit > > when going through the plan9foundation source archives. > > Would be curious to hear more about its involvement > > with plan9. > > Pretty sure datakit predated Plan 9, didn't Greg Chesson work on that? > He was my mentor at SGI, my memory is datakit was sort of early on in > his career and then he did XTP, which nobody knows about but I believe > is still used by the military. > > Unless the early Bell Labs datakit and the Plan 9 datakit are different > things. From ggm at algebras.org Wed Jun 22 11:55:15 2022 From: ggm at algebras.org (George Michaelson) Date: Wed, 22 Jun 2022 11:55:15 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: <20220622001311.GU26016@mcvoy.com> Message-ID: There was this persisting story that Ken got permission from somebody like CBS or Sony to have a very large amount of classical music on a 400MB drive, for research purposes. No, really: he was doing some psycho-acoustic thing comparing compressed to uncompressed for somebody, or improving on the Fraunhofer algorithms which became MP3. The point was, the rest of us had to listen to CDs and Ken had the complete works of Bach (or something) on a hard drive, which we were told he kept in the office, and played at home over a landline of some horrendously high bandwidth, unimaginable speeds like a megabit, imagine, a MILLION of those suckers. How dare he. That's more than the whole of Queensland.
I imagine the truth is much less interesting, and there was no major IPR fraud going on at the labs coding stuff as MP3 like we imagined, under the table. I imagine this would also have been a Datakit T-1. But surely that was a 1.544 Mbit carrier? T1 was smaller than E1 because Europeans and Asians learned to count to 32 not 24. -G On Wed, Jun 22, 2022 at 10:48 AM Rob Pike wrote: > > Plan 9 used Datakit as its network for quite a while. The Gnot terminals had an INCON interface, a megabit (approximately) twisted pair adjunct to Datakit. I had an INCON link running over a T-1 link to my house - great excitement back in the day. (The kernel downloaded over the line and booted the machine up to the window system - there was no local disk - from power up, in 7 seconds.) NJ Bell needed to install a new nitrogen-pressurized 26-pair cable, supported by a new telephone pole, to set it up, because I had already used up all available pairs on the existing line to my house. All included at no extra cost. (You pay for the service, not its construction.)
>> He was my mentor at SGI, my memory is datakit was sort of early on in >> his career and then he did XTP, which nobody knows about but I believe >> is still used by the military. >> >> Unless the early Bell Labs datakit and the Plan 9 datakit are different >> things. From bakul at iitbombay.org Wed Jun 22 12:10:10 2022 From: bakul at iitbombay.org (Bakul Shah) Date: Tue, 21 Jun 2022 19:10:10 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: <20220622001311.GU26016@mcvoy.com> Message-ID: <6A0E94A6-17FA-49AB-9D9A-72AE5AE628F2@iitbombay.org> 400MB is less than a CD's worth! Compressed (MP3) would reduce the space by a factor of 11 or so. > On Jun 21, 2022, at 6:55 PM, George Michaelson wrote: > > There was this persisting story that Ken got permission from somebody > like CBS or Sony to have a very large amount of classical music on a > 400MB drive, for research purposes. No, really: he was doing some > psycho-acoustic thing comparing compressed to uncompressed for > somebody, or improving on the fraunhoffer algorithms which became MP3. > The point was, the rest of us had to listen to CDs and Ken had the > complete works of Bach (or something) on a hard drive, which we were > told he kept in the office, and played at home over a landline of some > horrendously high bandwidth, un-imaginable speeds like a megabit, > imagine, a MILLION of those suckers. How dare he. Thats more than the > whole of queensland. I imagine the truth is much less interesting, and > there was no major IPR fraud going on at the labs coding stuff as MP3 > like we imagined, under the table. > > I imagine this would also have been a Datakit T-1. But surely that was > a 1.44mbit carrier? T1 was smaller than E1 because europeans and > asians learned to count to 32 not 24. > > -G > > On Wed, Jun 22, 2022 at 10:48 AM Rob Pike wrote: >> >> Plan 9 used Datakit as its network for quite a while. 
The Gnot terminals had an INCON interface, a megabit (approximately) twisted pair adjunct to Datakit. I had an INCON link running over a T-1 link to my house - great excitement back in the day. (The kernel downloaded over the line and booted the machine up to the window system - there was no local disk - from power up, in 7 seconds.) NJ Bell needed to install a new nitrogen-pressurized 26-pair cable, supported by a new telephone pole, to set it up, because I had already used up all available pairs on the existing line to my house. All included at no extra cost. (You pay for the service, not its construction.) >> >> When the internet became unavoidable, we used Plan 9's import mechanism to import the single external TCP/IP interface from our gateway machine, over Datakit, to the Gnots. We did the same, but importing now over IL (an ethernet protocol built by Phil Winterbottom) when our terminals became PCs. >> >> That's how I remember it, at least, but I might have got some details wrong. I think much of this is covered in http://doc.cat-v.org/plan_9/4th_edition/papers/net/ >> >> -rob >> >> >> On Wed, Jun 22, 2022 at 10:13 AM Larry McVoy wrote: >>> >>> On Tue, Jun 21, 2022 at 05:56:02PM -0600, Jacob Moody wrote: >>>> I recently stumbled across the existence of datakit >>>> when going through the plan9foundation source archives. >>>> Would be curious to hear more about its involvement >>>> with plan9. >>> >>> Pretty sure datakit predated Plan 9, didn't Greg Chesson work on that? >>> He was my mentor at SGI, my memory is datakit was sort of early on in >>> his career and then he did XTP, which nobody knows about but I believe >>> is still used by the military. >>> >>> Unless the early Bell Labs datakit and the Plan 9 datakit are different >>> things. 
From jon at fourwinds.com Wed Jun 22 12:14:09 2022 From: jon at fourwinds.com (Jon Steinhart) Date: Tue, 21 Jun 2022 19:14:09 -0700 Subject: [TUHS] forgotten versions In-Reply-To: References: <20220622001311.GU26016@mcvoy.com> Message-ID: <202206220214.25M2E9A81850003@darkstar.fourwinds.com> George Michaelson writes: > There was this persisting story that Ken got permission from somebody > like CBS or Sony to have a very large amount of classical music on a > 400MB drive, for research purposes. No, really: he was doing some > psycho-acoustic thing comparing compressed to uncompressed for > somebody, or improving on the fraunhoffer algorithms which became MP3. > The point was, the rest of us had to listen to CDs and Ken had the > complete works of Bach (or something) on a hard drive, which we were > told he kept in the office, and played at home over a landline of some > horrendously high bandwidth, un-imaginable speeds like a megabit, > imagine, a MILLION of those suckers. How dare he. Thats more than the > whole of queensland. I imagine the truth is much less interesting, and > there was no major IPR fraud going on at the labs coding stuff as MP3 > like we imagined, under the table. > > I imagine this would also have been a Datakit T-1. But surely that was > a 1.44mbit carrier? T1 was smaller than E1 because europeans and > asians learned to count to 32 not 24. > > -G This reminds me of a Ken story from the late '90s. I was at a conference that I won't name where Ken gave a talk about his compression work; if I remember correctly his goal was to fit all of the Billboard Top 100 songs of all time onto a single CD. He showed us the big stack of disks that he made to give to us, but then said that to his surprise the lawyers refused to give permission. At that point he became very focused on messing with his slides while everyone got up, got in line, and took a disc. After the pile was gone Ken looked up and nonchalantly continued his talk.
That might also have been the conference at which Ken showed us videos of him in a MIG. Jon From andrew at humeweb.com Wed Jun 22 12:16:16 2022 From: andrew at humeweb.com (Andrew Hume) Date: Tue, 21 Jun 2022 19:16:16 -0700 Subject: [TUHS] forgotten versions In-Reply-To: <20220622001311.GU26016@mcvoy.com> References: <20220622001311.GU26016@mcvoy.com> Message-ID: <5C595018-6D41-4D19-B99A-F7FA89D962AF@humeweb.com> i joined the labs in 1981. during that first year, i worked on S/NET and did a comparison with data kit (or a direct predecessor). > On Jun 21, 2022, at 5:13 PM, Larry McVoy wrote: > > On Tue, Jun 21, 2022 at 05:56:02PM -0600, Jacob Moody wrote: >> I recently stumbled across the existence of datakit >> when going through the plan9foundation source archives. >> Would be curious to hear more about its involvement >> with plan9. > > Pretty sure datakit predated Plan 9, didn't Greg Chesson work on that? > He was my mentor at SGI, my memory is datakit was sort of early on in > his career and then he did XTP, which nobody knows about but I believe > is still used by the military. > > Unless the early Bell Labs datakit and the Plan 9 datakit are different > things. From andrew at humeweb.com Wed Jun 22 12:19:36 2022 From: andrew at humeweb.com (Andrew Hume) Date: Tue, 21 Jun 2022 19:19:36 -0700 Subject: [TUHS] forgotten versions In-Reply-To: <202206220214.25M2E9A81850003@darkstar.fourwinds.com> References: <20220622001311.GU26016@mcvoy.com> <202206220214.25M2E9A81850003@darkstar.fourwinds.com> Message-ID: the early versions of the audio compression stuff were not quite as good as the later versions (which became apples stuff) but compressed to substantially smaller size. ken compressed 2-3 hrs or so of music for my wedding and that was rather less than a CD.
> On Jun 21, 2022, at 7:14 PM, Jon Steinhart wrote: > > George Michaelson writes: >> There was this persisting story that Ken got permission from somebody >> like CBS or Sony to have a very large amount of classical music on a >> 400MB drive, for research purposes. No, really: he was doing some >> psycho-acoustic thing comparing compressed to uncompressed for >> somebody, or improving on the fraunhoffer algorithms which became MP3. >> The point was, the rest of us had to listen to CDs and Ken had the >> complete works of Bach (or something) on a hard drive, which we were >> told he kept in the office, and played at home over a landline of some >> horrendously high bandwidth, un-imaginable speeds like a megabit, >> imagine, a MILLION of those suckers. How dare he. Thats more than the >> whole of queensland. I imagine the truth is much less interesting, and >> there was no major IPR fraud going on at the labs coding stuff as MP3 >> like we imagined, under the table. >> >> I imagine this would also have been a Datakit T-1. But surely that was >> a 1.44mbit carrier? T1 was smaller than E1 because europeans and >> asians learned to count to 32 not 24. >> >> -G > > This reminds me of a Ken story from the late '90s. I was at a conference > that I won't name where Ken gave a talk about his compression work; if I > remember correctly his goal was to fit all of the Billboard Top 100 songs > of all time onto a single CD. He showed us the big stack of disks that he > made to give to us, but then said that to his surprise the the lawyers > refused to give permission. At that point he became very focused on messing > with his slides while everyone got up, got in line, and took a disc. After > the pile was gone Ken looked up and nonchalantly continued his talk. > > That might also have been the conference at which Ken showed us videos of > him in a MIG. 
> > Jon From brad at anduin.eldar.org Wed Jun 22 12:55:38 2022 From: brad at anduin.eldar.org (Brad Spencer) Date: Tue, 21 Jun 2022 22:55:38 -0400 Subject: [TUHS] forgotten versions In-Reply-To: <20220622001311.GU26016@mcvoy.com> (message from Larry McVoy on Tue, 21 Jun 2022 17:13:11 -0700) Message-ID: Larry McVoy writes: > On Tue, Jun 21, 2022 at 05:56:02PM -0600, Jacob Moody wrote: >> I recently stumbled across the existence of datakit >> when going through the plan9foundation source archives. >> Would be curious to hear more about its involvement >> with plan9. > > Pretty sure datakit predated Plan 9, didn't Greg Chesson work on that? > He was my mentor at SGI, my memory is datakit was sort of early on in > his career and then he did XTP, which nobody knows about but I believe > is still used by the military. > > Unless the early Bell Labs datakit and the Plan 9 datakit are different > things. When I was at AT&T in the early to mid '90s Datakit could manifest in a couple of ways. The simplest was as a tty .. that is you had a serial terminal and used a dial string with bangs in it to get somewhere. This method would have used some number of copper pairs into a RJ45 to DB25 adaptor connected to the terminal. The terminals were usually 730s or 6xx (630s maybe). They had some graphics ability, sort of, with some windowing support. The version of SVR3 we had running on the Vaxs had a program in it that would be able to take advantage of the windowing features of the 730 over a serial tty line (multiple windows, X-Windows like sort of). I have totally forgotten what that was called, however. These terminals also had a mouse with them. The second way that Datakit would show itself was as a fiber pair tied directly to a computer system. There were Datakit fiber boards for nearly all of the platforms that the group I was in used... Vax (via a companion box), Tandem, Sun Sparc, at the very least. 
There was, usually, a third party kernel driver required that would sometimes be a bit messy and/or have a personality to it. This setup sometimes provided a dkcu command and/or a modified cu command, and you could use a bang path dial string from the system to get somewhere else using the Datakit network and act like a tty. Or, you could do native Datakit in your user land code. This is what the product I worked on did to talk to a Datakit network at the RBOCs to get access to the switches in the telco network for monitoring purposes. Often in those cases, Datakit would be transported over an X.25 network. Later, as ethernet and TCP/UDP/IP became the thing, the switches learned how to speak TCP/IP, although sometimes indirectly through a translator box in front of the switch. A number of the telco switches we talked to spoke a dialect of X.25 called BX.25 (Bell X.25 or some such) and the translator boxes would exist on the switch side and our side and basically tunnel BX.25 over TCP/IP. This was a bit different than using the Datakit network to get to the switch, but I seem to remember that some RBOCs did something where BX.25 was sent via Datakit which in the wider area network was transported over X.25 to the monitoring system. There were things called Datakit switches; each was a cabinet a few feet tall and a couple of feet wide. It housed a bunch of boards of your choosing depending on how you wanted to use Datakit. Our group had its own switch at the site I was at, run by the persons dealing with our development lab. There were also a number of Datakit switches for the site itself run by the general IT organization. I seem to remember those switches having hot pluggable boards with an embedded 3B system inside (although I may be misremembering that a bit).
-- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From robpike at gmail.com Wed Jun 22 12:58:15 2022 From: robpike at gmail.com (Rob Pike) Date: Wed, 22 Jun 2022 12:58:15 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: <20220622001311.GU26016@mcvoy.com> <202206220214.25M2E9A81850003@darkstar.fourwinds.com> Message-ID: The Plan 9 CD-ROM needed about 100MB for the full distribution, if that. We hatched a plan to fill up the rest with encoded music and include the software to decode it. (We wanted to get the encoder out too, but lawyers stood in the way. Keep reading.) Using connections I had with folks in the area, and some very helpful friends in the music business, I got permission to distribute several hours of existing recorded stuff from groups like the Residents and Wire. Lou Reed gave a couple of pieces too - he was very interested in Ken and Sean's work (which, it should be noted, was built on groundbreaking work done in the acoustics center at Bell Labs) and visited us to check it out. Debby Harry even recorded an original song for us in the studio. We had permission for all this of course, and releases from everyone involved. It was very exciting. So naturally, just before release, an asshole (I am being kind) lawyer at AT&T headquarters in Manhattan stopped the project cold. In a phone call that treated me as shabbily as I have ever been, he said he didn't know who these "assholes" (again, but this time his term) were and therefore the releases were meaningless because anyone could have written them. And that, my friends, is why MP-3 took off instead of the far better follow-on system we were on the cusp of getting out the door. -rob P.S. No, I don't have the music any more. Too sad to keep. 
On Wed, Jun 22, 2022 at 12:19 PM Andrew Hume wrote: > the early versions of the audio compression stuff were not quite is good as > the later versions (which became apples stuff) but compressed to > substantially > smaller size. ken compressed 2-3 hrs or so of music for my wedding and that > was rather less than a CD. > > > On Jun 21, 2022, at 7:14 PM, Jon Steinhart wrote: > > > > George Michaelson writes: > >> There was this persisting story that Ken got permission from somebody > >> like CBS or Sony to have a very large amount of classical music on a > >> 400MB drive, for research purposes. No, really: he was doing some > >> psycho-acoustic thing comparing compressed to uncompressed for > >> somebody, or improving on the fraunhoffer algorithms which became MP3. > >> The point was, the rest of us had to listen to CDs and Ken had the > >> complete works of Bach (or something) on a hard drive, which we were > >> told he kept in the office, and played at home over a landline of some > >> horrendously high bandwidth, un-imaginable speeds like a megabit, > >> imagine, a MILLION of those suckers. How dare he. Thats more than the > >> whole of queensland. I imagine the truth is much less interesting, and > >> there was no major IPR fraud going on at the labs coding stuff as MP3 > >> like we imagined, under the table. > >> > >> I imagine this would also have been a Datakit T-1. But surely that was > >> a 1.44mbit carrier? T1 was smaller than E1 because europeans and > >> asians learned to count to 32 not 24. > >> > >> -G > > > > This reminds me of a Ken story from the late '90s. I was at a conference > > that I won't name where Ken gave a talk about his compression work; if I > > remember correctly his goal was to fit all of the Billboard Top 100 songs > > of all time onto a single CD. He showed us the big stack of disks that > he > > made to give to us, but then said that to his surprise the the lawyers > > refused to give permission. 
At that point he became very focused on > messing > > with his slides while everyone got up, got in line, and took a disc. > After > > the pile was gone Ken looked up and nonchalantly continued his talk. > > > > That might also have been the conference at which Ken showed us videos of > > him in a MiG. > > > > Jon > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggm at algebras.org Wed Jun 22 13:09:06 2022 From: ggm at algebras.org (George Michaelson) Date: Wed, 22 Jun 2022 13:09:06 +1000 Subject: [TUHS] forgotten versions In-Reply-To: References: <20220622001311.GU26016@mcvoy.com> <202206220214.25M2E9A81850003@darkstar.fourwinds.com> Message-ID: as usual, truth is stranger than fiction, or the Chinese whispers versions of this which got out. What a very sad, but all too believable story. Believable but.. if you saw this as an episode of a TV series, you'd say "nah.. this couldn't happen in real life" -G From sjenkin at canb.auug.org.au Wed Jun 22 13:25:38 2022 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Wed, 22 Jun 2022 13:25:38 +1000 Subject: [TUHS] Early Unix Growth: Number of “Installations” or Licences? Message-ID: <52BD592D-7E76-4762-9DB1-DF53AA87CAC9@canb.auug.org.au> I’ve been wondering about the growth of Unix and if there’s any good data available. There’s the Early Unix Epoch, which probably ends with the Unix Support Group assuming the distribution role, plus providing / distributing their version of the code. Later there’s commercial Unix: System III and System V, I guess. BSD, until the lawsuit was resolved, required a Source code license, but their installation count is important in pre-Commercial Unix. Large licensees like SUN, HP & IBM (AIX) may not have published license counts for their versions - but then, were their derivatives “Unix” or something else? Warner Loch’s paper has data to around 1978 [below]. 
I’ve no idea where to find data for USG issued licences, or if the number of binary & source licences were ever reported in the Commercial Era by AT&T. I’ll not be the first person who’s gone down this road, but my Search Fu isn’t good enough to find them. Wondering if anyone on the list can point me at resources, even a bunch of annual reports. I don’t mind manually pulling out the data I’m interested in. But why reinvent the wheel if the work is already done? steve =============== numbers extracted from Warner Loch’s paper. 2nd Edn June 1972 10 installations 3rd Edn February 1973 16 4th Edn November 1973 >20, or 25 July 74 CACM paper "Unix Time Sharing System” after which external interest exploded 6th Edn 1975 ??? 7th Edn March 1978 600+, >300 inside Bell System, "even more have been licensed to outside users” =============== -- Steve Jenkin, 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From jnc at mercury.lcs.mit.edu Wed Jun 22 22:06:23 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 22 Jun 2022 08:06:23 -0400 (EDT) Subject: [TUHS] forgotten versions Message-ID: <20220622120623.A856A18C08D@mercury.lcs.mit.edu> > From: Paul Ruizendaal > [c] Fifth Edition UNIX PDP-11/40 June 1974 > [d] Sixth Edition UNIX PDP-11/45 May 1975 > [e] Seventh Edition UNIX PDP-11/70 January 1979 This table gives an erroneous impression of which versions supported which PDP-11 models. 4th Edition supported only the /45; 5th Edition added support for the /40; and the /70 appeared in 6th edition. Noel From douglas.mcilroy at dartmouth.edu Thu Jun 23 05:03:12 2022 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Wed, 22 Jun 2022 15:03:12 -0400 Subject: [TUHS] Sandy Fraser Message-ID: Sandy Fraser died June 13. The moving spirit behind Datakit, Sandy served as director then executive director responsible for computing science at Bell Labs in the era of v8, v9, and v10. 
He became VP at AT&T Shannon Labs after the split with Lucent. Doug From andrew at humeweb.com Thu Jun 23 05:06:22 2022 From: andrew at humeweb.com (Andrew Hume) Date: Wed, 22 Jun 2022 12:06:22 -0700 Subject: [TUHS] Sandy Fraser In-Reply-To: References: Message-ID: <4501B167-3B0D-4195-B963-284D1DCCA886@humeweb.com> sad to hear. sandy was a great manager. he also was a great supporter of my work at the labs. > On Jun 22, 2022, at 12:03 PM, Douglas McIlroy wrote: > > Sandy Fraser died June 13. The moving spirit behind Datakit, Sandy > served as director then executive director responsible for computing > science at Bell Labs in the era of v8, v9, and v10. He became VP at > AT&T Shannon Labs after the split with Lucent. > > Doug From crossd at gmail.com Thu Jun 23 10:02:55 2022 From: crossd at gmail.com (Dan Cross) Date: Wed, 22 Jun 2022 20:02:55 -0400 Subject: [TUHS] forgotten versions In-Reply-To: <20220622120623.A856A18C08D@mercury.lcs.mit.edu> References: <20220622120623.A856A18C08D@mercury.lcs.mit.edu> Message-ID: On Wed, Jun 22, 2022 at 8:06 AM Noel Chiappa wrote: > > From: Paul Ruizendaal > > > [c] Fifth Edition UNIX PDP-11/40 June 1974 > > [d] Sixth Edition UNIX PDP-11/45 May 1975 > > [e] Seventh Edition UNIX PDP-11/70 January 1979 > > This table gives an erroneous impression of which versions supported which > PDP-11 models. 4th Edition supported only the /45; 5th Edition added support > for the /40; and the /70 appeared in 6th edition. I believe that's actually a menu, and that selecting from it will connect you to an (emulated) machine of the given type running that version of the OS. Stephen Jones and LCM+L set that up for the Unix 50th anniversary. - Dan C. 
From jnc at mercury.lcs.mit.edu Thu Jun 23 12:18:58 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 22 Jun 2022 22:18:58 -0400 (EDT) Subject: [TUHS] forgotten versions Message-ID: <20220623021858.C1BD918C095@mercury.lcs.mit.edu> > From: Dan Cross > I believe that's actually a menu Hence the "erroneous _impression_" (emphasis added). I'm curious as to how they decided which models to run which editions on. Although V4 _ran_ on the /45, split I+D wasn't supported - for user or kernel - until V6. (I'm assuming a number of things - both in the kernel, and applications - started hitting the 64KB limit, which led to its support.) Speaking of split I+D, there's an interesting little mystery in V6 that at one point in time I thought involved split I+D - but now that I look closely, apparently not. The mystery involves a 'tombstone' in the V6 buf.h: #define B_RELOC 0200 /* no longer used */ I had created (in my mind) an explanation of what this is all about - but now that I look, it's probably all wrong! My explanation involves the slightly odd layout of the kernel in physical memory, with split I+D; data below the code, at physical 0. This actually makes a lot of sense; it means the virtual address of any data (e.g. a buffer) is the same as its physical address (needed for DMA). It does require the oddness of 'sysfix', to invert the order of code+data in the system binary, plus odd little quirks in the assembler startup (e.g. copying the code up to make room for BSS). So I thought that B_RELOC was a hangover from a time, at the start of split I+D, when data _wasn't_ at physical 0, so a buffer's virtual and physical addresses differed. But that must be wrong (at least in any simple way). B_RELOC was in buf.h as of V4 - the first kernel version in C - with no split I+D. So my theory has to be wrong. However, I am unable to find any code in the V4 kernel which uses it! 
So unless someone who remembers the very early PDP-11 kernel can enlighten us, its purpose will always remain a mystery! Noel From imp at bsdimp.com Fri Jun 24 04:20:31 2022 From: imp at bsdimp.com (Warner Losh) Date: Thu, 23 Jun 2022 12:20:31 -0600 Subject: [TUHS] Early Unix Growth: Number of “Installations” or Licences? In-Reply-To: <52BD592D-7E76-4762-9DB1-DF53AA87CAC9@canb.auug.org.au> References: <52BD592D-7E76-4762-9DB1-DF53AA87CAC9@canb.auug.org.au> Message-ID: On Tue, Jun 21, 2022 at 9:25 PM steve jenkin wrote: > I’ve been wondering about the growth of Unix and if there’s any good data > available. > > There’s the Early Unix Epoch, which probably ends with the Unix Support > Group assuming the distribution role, plus providing / distributing their > version of the code. > > Later there’s commercial Unix: > System III and System V, I guess. > > BSD, until the lawsuit was resolved, required a Source code license, but > their installation count is important in pre-Commercial Unix. > > Large licensees like SUN, HP & IBM (AIX) may not have published license > counts for their versions - but then, were their derivatives “Unix” or > something else? > > > Warner Loch’s paper has data to around 1978 [below]. > > I’ve no idea where to find data for USG issued licences, or if the number > of binary & source licences were ever reported in the Commercial Era by > AT&T. > > I’ll not be the first person who’s gone down this road, but my Search Fu > isn’t good enough to find them. > > Wondering if anyone on the list can point me at resources, even a bunch of > annual reports. > > I don’t mind manually pulling out the data I’m interested in. But why > reinvent the wheel if the work is already done? > > steve > > =============== > > numbers extracted from Warner Loch’s paper. 
> I think he spells his last name "Losh" :) > > < > https://papers.freebsd.org/2020/FOSDEM/losh-Hidden_early_history_of_Unix.files/slides.pdf > > > > 2nd Edn June 1972 10 installations > 3rd Edn February 1973 16 > 4th Edn November 1973 >20, or 25 > > July 74 CACM paper "Unix > Time Sharing System” after which external interest exploded > > 6th Edn 1975 ??? > 7th Edn March 1978 600+, >300 inside Bell > System, "even more have been licensed to outside users” > These were the numbers that I could find in contemporary documentation. 5th and 6th edition didn't have a number like the manuals up to the 4th edition. I got the 7th edition from somewhere I don't recall, but as the 6th and 7th editions were widely licensed and started having lots of users based on ports that happened, it can be hard to put numbers down. Warner > =============== > > -- > Steve Jenkin, > 0412 786 915 (+61 412 786 915) > PO Box 38, Kippax ACT 2615, AUSTRALIA > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnr at planet.nl Fri Jun 24 16:47:00 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Fri, 24 Jun 2022 08:47:00 +0200 Subject: [TUHS] forgotten versions Message-ID: <38ED67E4-4D75-47A8-BA44-4A08DF487EC2@planet.nl> On Tue, Jun 21, 2022 at 05:56:02PM -0600, Jacob Moody wrote: > I recently stumbled across the existence of datakit > when going through the plan9foundation source archives. > Would be curious to hear more about its involvement > with plan9. There are at least 2 versions of Datakit. In my current understanding there are “Datakit”, which is the research version, and “Datakit II”, which seems to be the version that was broadly deployed into the AT&T network in the late 80’s -- but very likely the story is more complicated than that. Plan9 is contemporaneous with Datakit II. 
In short, Sandy Fraser developed the “Spider” network in 1970-1974 and this was actively used with early Unix (at least V4, maybe earlier). Sandy was dissatisfied with Spider and used its learnings to start again. The key ideas seem to have gelled together around 1977 with the first switches being available in 1979 or so. The first deployment into the Bell system was around 1982 (initially connecting a handful of Bell sites). In 1979/1980 there were two Datakit switches, one in the office of Greg Chesson who was writing the first iteration of its control software, and one in the office/lab of Gottfried Luderer et al., who used it to develop a distributed Unix. Datakit at this time is well described in two papers that the ACM recently moved from behind its paywall: https://dl.acm.org/doi/pdf/10.1145/1013879.802670 (mostly about 1980 Datakit) https://dl.acm.org/doi/pdf/10.1145/800216.806604 (mostly about distributed Unix) The Chesson control software was replaced by new code written by Lee McMahon around 1981 (note: this is still Datakit 1). The Datakit driver code in V8 is designed to work with this revised Datakit. Three aspects of Datakit show through in the design of the V8-V10 networking code: - a separation in control words and data words (this e.g. comes back in ‘streams') - it works with virtual circuits; a connection is expensive to set up (‘dial’), but cheap to use - it does not guarantee reliable packet delivery, but it does guarantee in-order delivery Probably you will see echoes of this in early Plan9 network code, but I have not studied that. From ality at pbrane.org Sun Jun 26 05:16:59 2022 From: ality at pbrane.org (Anthony Martin) Date: Sat, 25 Jun 2022 12:16:59 -0700 Subject: [TUHS] forgotten versions In-Reply-To: <38ED67E4-4D75-47A8-BA44-4A08DF487EC2@planet.nl> References: <38ED67E4-4D75-47A8-BA44-4A08DF487EC2@planet.nl> Message-ID: The following papers are a good overview of Datakit and its predecessors. A. 
Fraser, "Towards a Universal Data Transport System," in IEEE Journal on Selected Areas in Communications, vol. 1, no. 5, pp. 803-816, November 1983, doi: 10.1109/JSAC.1983.1145998. A. G. Fraser, "Early experiments with asynchronous time division networks," in IEEE Network, vol. 7, no. 1, pp. 12-26, Jan. 1993, doi:10.1109/65.193084. The latter mentions Plan 9 but only in passing. Paul Ruizendaal once said: > Probably you will see echoes of this in early Plan9 network code, but I have not studied that. As someone who has studied Plan 9 extensively, though with no insider knowledge, it's definitely noticeable. "In the aftermath, perhaps the most valuable effect of dealing with Datakit was to encourage the generalized and flexible approach to networking begun in 8th edition Unix that is carried forward into Plan 9." - dmr (2004) Cheers, Anthony From pnr at planet.nl Sun Jun 26 06:45:54 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Sat, 25 Jun 2022 22:45:54 +0200 Subject: [TUHS] forgotten versions In-Reply-To: References: <38ED67E4-4D75-47A8-BA44-4A08DF487EC2@planet.nl> Message-ID: <249B533B-E8D4-462C-8A6D-16A198BA055D@planet.nl> > On 25 Jun 2022, at 21:16, Anthony Martin wrote: > > The following papers are a good overview of Datakit and its > predecessors. > > A. Fraser, "Towards a Universal Data Transport System," in IEEE > Journal on Selected Areas in Communications, vol. 1, no. 5, pp. > 803-816, November 1983, doi: 10.1109/JSAC.1983.1145998. > > A. G. Fraser, "Early experiments with asynchronous time division > networks," in IEEE Network, vol. 7, no. 1, pp. 12-26, Jan. 1993, > doi:10.1109/65.193084. > > The latter mentions Plan 9 but only in passing. Yes, those are great papers - unfortunately behind a paywall. There is a great 1994 video on Youtube by Sandy Fraser himself that more or less follows the 1993 paper: https://www.youtube.com/watch?v=ojRtJ1U6Qzw As Doug mentioned on this list, Sandy Fraser passed away earlier this month. 
In the past years I’ve worked on understanding (early) Datakit and Sandy Fraser and his wife were most kind with assistance looking for papers. I’ve also benefitted from the input of Bill Marshall and of course Doug McIlroy. I’ll share my summary of Research Datakit in a separate post. Paul > Paul Ruizendaal once said: >> Probably you will see echoes of this in early Plan9 network code, but I have not studied that. > > As someone how has studied Plan 9 extensively, though with no insider > knowledge, it's definitely noticeable. > > "In the aftermath, perhaps the most valuable effect of dealing with > Datakit was to encourage the generalized and flexible approach to > networking begun in 8th edition Unix that is carried forward into Plan > 9." - dmr (2004) > > Cheers, > Anthony From pnr at planet.nl Sun Jun 26 09:01:07 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Sun, 26 Jun 2022 01:01:07 +0200 Subject: [TUHS] Research Datakit notes Message-ID: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> Wanted to post my notes as plain text, but the bullets / sub-bullets get lost. Here is a 2 page PDF with my notes on Research Datakit: https://www.jslite.net/notes/rdk.pdf The main takeaway is that connection build-up and tear-down is considerably more expensive than with TCP. The first cost is in the network, which builds up a dedicated path for each connection. Bandwidth is not allocated/reserved, but a path is and routing information is set up at each hop. The other cost is in the relatively verbose switch-host communication in this phase. This compares to the 3 packets exchanged at the hosts’ driver level to set up a TCP connection, with no permanent resources consumed in the network. In compensation, the cost to use a connection is considerably lower: the routing is known and the host-host link protocol (“URP") can be light-weight, as the network guarantees in-order delivery without duplicates but packets may be corrupted or lost (i.e. 
as if the connection is a phone line with a modem). No need to deal with packet fragmentation, stream reassembly and congestion storms as in the TCP of the early 80’s. Doing UDP traffic to a fixed remote host is easily mapped to using URP with no error correction and no flow control. Doing UDP where the remote host is different all the time is not practical on a Datakit network (i.e. a virtual circuit would be set up anyway). A secondary takeaway is that Research Datakit eventually settled on a three-level ascii namespace: “area/trunk/switch”. On each switch, the hosts would be known by name, and each connection request had a service name as parameter. In an alternate reality we would maybe have used “ca/stclara/mtnview!google!www” to do a search. From lm at mcvoy.com Sun Jun 26 09:09:39 2022 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 25 Jun 2022 16:09:39 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> Message-ID: <20220625230939.GG19404@mcvoy.com> Nice. Any chance you want to do a TCP/XTP comparison? On Sun, Jun 26, 2022 at 01:01:07AM +0200, Paul Ruizendaal wrote: > Wanted to post my notes as plain text, but the bullets / sub-bullets get lost. > > Here is a 2 page PDF with my notes on Research Datakit: > > https://www.jslite.net/notes/rdk.pdf > > The main takeaway is that connection build-up and tear-down is considerably more expensive than with TCP. The first cost is in the network, which builds up a dedicated path for each connection. Bandwidth is not allocated/reserved, but a path is and routing information is set up at each hop. The other cost is in the relatively verbose switch-host communication in this phase. This compares to the 3 packets exchanged at the hosts’ driver level to set up a TCP connection, with no permanent resources consumed in the network. 
> > In compensation, the cost to use a connection is considerably lower: the routing is known and the host-host link protocol (“URP") can be light-weight, as the network guarantees in-order delivery without duplicates but packets may be corrupted or lost (i.e. as if the connection is a phone line with a modem). No need to deal with packet fragmentation, stream reassembly and congestion storms as in the TCP of the early 80’s. > > Doing UDP traffic to a fixed remote host is easily mapped to using URP with no error correction and no flow control. Doing UDP where the remote host is different all the time is not practical on a Datakit network (i.e. a virtual circuit would be set up anyway). > > A secondary takeaway is that Research Datakit eventually settled on a three-level ascii namespace: “area/trunk/switch”. On each switch, the hosts would be known by name, and each connection request had a service name as parameter. In an alternate reality we would maybe have used “ca/stclara/mtnview!google!www” to do a search. -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From robpike at gmail.com Sun Jun 26 09:57:17 2022 From: robpike at gmail.com (Rob Pike) Date: Sun, 26 Jun 2022 09:57:17 +1000 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220625230939.GG19404@mcvoy.com> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: One of the things we liked about Datakit was that the computer didn't have to establish the connection before it could reject the call, unlike TCP/IP where all validation happens after the connection is made. This is also why sockets and Datakit never worked together; sockets pretty much assume Ethernet-like connection rules. I am not a networking expert, but to me in this regard at least Datakit seemed like a prettier picture. I suppose you can DOS-attack the network, but not the machines. 
Datakit had other issues, for sure, like the expensive racks of hardware, but then that's because, for better and worse, it was designed by phone engineers rather than.... however you'd characterize Ethernet and its original "I scream while listening to your whisper", 5V into 50Ω Schmitt-triggered craziness. Ethernet's come a long way, but the engineering of the original Radio Shack parts was not favored by the Bell Labs crowd. -rob On Sun, Jun 26, 2022 at 9:09 AM Larry McVoy wrote: > Nice. Any chance you want to do a TCP/XTP comparison? > > On Sun, Jun 26, 2022 at 01:01:07AM +0200, Paul Ruizendaal wrote: > > Wanted to post my notes as plain text, but the bullets / sub-bullets get > lost. > > > > Here is a 2 page PDF with my notes on Research Datakit: > > > > https://www.jslite.net/notes/rdk.pdf > > > > The main takeaway is that connection build-up and tear-down is > considerably more expensive than with TCP. The first cost is in the > network, which builds up a dedicated path for each connection. Bandwidth is > not allocated/reserved, but a path is and routing information is set up at > each hop. The other cost is in the relatively verbose switch-host > communication in this phase. This compares to the 3 packets exchanged at > the hosts’ driver level to set up a TCP connection, with no permanent > resources consumed in the network. > > > > In compensation, the cost to use a connection is considerably lower: the > routing is known and the host-host link protocol (“URP") can be > light-weight, as the network guarantees in-order delivery without > duplicates but packets may be corrupted or lost (i.e. as if the connection > is a phone line with a modem). No need to deal with packet fragmentation, > stream reassembly and congestion storms as in the TCP of the early 80’s. > > > > Doing UDP traffic to a fixed remote host is easily mapped to using URP > with no error correction and no flow control. 
Doing UDP where the remote > host is different all the time is not practical on a Datakit network (i.e. > a virtual circuit would be set up anyway). > > > > A secondary takeaway is that Research Datakit eventually settled on a > three-level ascii namespace: “area/trunk/switch”. On each switch, the > hosts would be known by name, and each connection request had a service > name as parameter. In an alternate reality we would maybe have used > “ca/stclara/mtnview!google!www” to do a search. > > -- > --- > Larry McVoy Retired to fishing > http://www.mcvoy.com/lm/boat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnr at planet.nl Sun Jun 26 11:17:27 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Sun, 26 Jun 2022 03:17:27 +0200 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: <1420AC1D-8AD7-43C7-8F8B-22E1708846EF@planet.nl> > On 26 Jun 2022, at 01:57, Rob Pike wrote: > > One of the things we liked about Datakit was that the computer didn't have to establish the connection before it could reject the call, unlike TCP/IP where all validation happens after the connection is made. This is also why sockets and Datakit never worked together; sockets pretty much assume Ethernet-like connection rules. > > I am not a networking expert, but to me in this regard at least Datakit seemed like a prettier picture. I suppose you can DOS-attack the network, but not the machines. Datakit had other issues, for sure, like the expensive racks of hardware, but then that's because, for better and worse, it was designed by phone engineers rather than.... however you'd characterize Ethernet and its original "I scream while listening to your whisper", 5V into 50Ω Schmitt-triggered craziness. Ethernet's come a long way, but the engineering of the original Radio Shack parts was not favored by the Bell Labs crowd. 
I was not putting Datakit down, just trying to explain why the V8 approach to networking may seem a little odd from a 1980’s TCP/IP perspective, but makes perfect sense from a Datakit perspective. In the end technology often becomes a hybrid of various solutions, and maybe in this case as well. By coincidence there was a post in the Internet History mailing list earlier today that appears to make this point. In his video (https://www.youtube.com/watch?v=ojRtJ1U6Qzw), Sandy explains why he became dissatisfied with Spider and the main reason was that doing switching/routing on a mini computer was just plain inefficient as compared to a telephone switch (at 37:06). This was 1972. The result was a new design, Datakit, that could route/switch packets at high speed and in parallel. On the internet history list, someone quipped: "Yeah, back then the joke was that McQuillan was the only one making money from ATM. :-) That did change in a big way (for a while) in the late 90s and early 2000s, before router silicon caught up." To this Craig Partridge responded: "Wasn't just router silicon -- it was router design. What made ATM appealing is that it made the inside of the router or switch parallel, which was necessary to push into multigigabit rates. Folks had to figure out how to rework an Internet router to be parallel and it took at least two major innovations: fully-standalone forwarding tables with associating forwarding engines and breaking packets apart (essentially into cells), squirting those parts through the parallel backplane, and then reassembling the packet at the outbound interface for transmission." This was around 2000. It is not my field of expertise, but it would seem to me that Sandy had figured out a core problem some 30 years before the TCP/IP world would come up with a similar solution. I would not even be surprised if I learned that modern telco routers transparently set up virtual circuits for tcp traffic. 
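[Editorial aside: Craig Partridge's description above (segment a packet into fixed-size cells at the inbound interface, squirt them through the parallel fabric, reassemble at the outbound interface) can be sketched in a few lines. This is purely illustrative: the cell payload size is ATM-like and the cell tuple layout is invented, not taken from any real switch.]

```python
# Toy illustration of ATM-style segmentation and reassembly (SAR).
# CELL_PAYLOAD and the (vci, last, payload) tuple are invented for this sketch.
CELL_PAYLOAD = 48  # ATM-like fixed payload size in bytes

def segment(packet: bytes, vci: int):
    """Split a packet into fixed-size cells tagged with a circuit id."""
    cells = []
    for off in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[off:off + CELL_PAYLOAD]
        last = off + CELL_PAYLOAD >= len(packet)  # end-of-packet marker
        cells.append((vci, last, chunk))
    return cells

def reassemble(cells):
    """Rebuild packets per circuit, as an outbound interface would."""
    buffers, packets = {}, []
    for vci, last, chunk in cells:
        buffers.setdefault(vci, bytearray()).extend(chunk)
        if last:
            packets.append((vci, bytes(buffers.pop(vci))))
    return packets

pkt = bytes(range(130))            # a 130-byte "packet"
cells = segment(pkt, vci=7)        # 48 + 48 + 34 bytes -> 3 cells
assert reassemble(cells) == [(7, pkt)]
```

The per-circuit buffers are the interesting part: cells from different circuits can interleave freely on the fabric, which is what makes the inside of the switch parallel.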
From ality at pbrane.org Sun Jun 26 11:41:25 2022 From: ality at pbrane.org (Anthony Martin) Date: Sat, 25 Jun 2022 18:41:25 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> Message-ID: Paul Ruizendaal once said: > Wanted to post my notes as plain text, but the bullets / sub-bullets get lost. > > Here is a 2 page PDF with my notes on Research Datakit: > > https://www.jslite.net/notes/rdk.pdf Really nice outline. I felt some déjà vu when opening the document: I remember reading your notes in April of 2020. Would you happen to know where I can find copies of these three papers? A. G. Fraser, "Datakit - A Modular Network for Synchronous and Asynchronous Traffic", Proc. ICC 79, June 1979, Boston, Ma., pp.20.1.1-20.1.3 G. L. Chesson, "Datakit Software Architecture", Proc. ICC 79, June 1979, Boston Ma., pp.20.2.1-20.2.5 G. L. Chesson and A. G. Fraser, "Datakit Network Architecture," Proc. Compcon 80, February 1980, San Francisco CA., pp.59-61 > A secondary takeaway is that Research Datakit eventually settled on a > three-level ascii namespace: “area/trunk/switch”. On each switch, the > hosts would be known by name, and each connection request had a > service name as parameter. In an alternate reality we would maybe have > used “ca/stclara/mtnview!google!www” to do a search. To connect to one of the Plan 9 cpu servers at Bell Labs, you would dial "nj/astro/helix!9fs". I do wonder how the relative hierarchical naming would have evolved to encompass the entire Internet. Would it have been more like "com/google/search!http"? Who knows? 
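[Editorial aside: the analogy above is easy to make concrete. The sketch below is a toy, assuming a hypothetical dial string of the form "area/trunk/switch!service"; it just shows that the Datakit path order is the mirror of a DNS-style name, with the service in place of a port.]

```python
# Toy mapping from a Datakit-style dial string to a DNS-like name.
# The dial-string grammar "area/trunk/switch!service" is taken from the
# thread; the dotted-name output format is invented for illustration.
def dial_to_dns(dial: str) -> str:
    path, service = dial.split("!", 1)     # split off the service name
    labels = path.split("/")               # most-significant label first
    return ".".join(reversed(labels)) + ":" + service

assert dial_to_dns("nj/astro/helix!9fs") == "helix.astro.nj:9fs"
assert dial_to_dns("com/google/search!http") == "search.google.com:http"
```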
☺ Thanks, Anthony From jnc at mercury.lcs.mit.edu Sun Jun 26 12:19:55 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 25 Jun 2022 22:19:55 -0400 (EDT) Subject: [TUHS] Research Datakit notes Message-ID: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> > From: Paul Ruizendaal > it would seem to me that Sandy had figured out a core problem some 30 > years before the TCP/IP world would come up with a similar solution. I > would not even be surprised if I learned that modern telco routers > transparently set up virtual circuits for tcp traffic. To fully explore this topic would take a book, which I don't have the energy to write, and nobody would bother to read, but... Anyway, I'm not up on the latest and greatest high-speed routers: I saw some stuff from one major vendor under NDA about a decade ago, but that's my most recent - but at that point there was nothing that looked even _vaguely_ like virtual circuits. (The stuff Craig was alluding to was just about connectivity for getting bits from _interface_ to _interface_ - if you don't have a giant crossbar - which is going to require buffering on each input anyway - how exactly do you get bits from board A to board Q - a single shared bus isn't going to do it...) A problem with anything like VC's in core switches is the growth of per-VC state - a major high-speed node will have packets from _millions_ of TCP connections flowing through it at any time. In the late-80's/early-90's - well over 30 years ago - I came up with an advanced routing architecture called Nimrod (see RFC-1992, "The Nimrod Routing Architecture"; RFC-1753 may be of interest too); it had things called 'flows' which were half way between pure datagrams (i.e. 
no setup - you just stick the right destination address in the header and send it off) and VCs (read the RFCs if you want to know why), and it went to a lot of trouble to allow flow aggregation in traffic going to core switches _precisely_ to limit the growth of state in core switches, which would have traffic from millions of connections going through them. I have barely begun to even scratch the surface, here. Noel From sjenkin at canb.auug.org.au Sun Jun 26 19:46:40 2022 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Sun, 26 Jun 2022 19:46:40 +1000 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> Message-ID: <1C6AFC69-F616-421A-B7BA-376ACDC295BA@canb.auug.org.au> > On 26 Jun 2022, at 12:19, Noel Chiappa wrote: > >> From: Paul Ruizendaal > >> it would seem to me that Sandy had figured out a core problem some 30 >> years before the TCP/IP world would come up with a similar solution. I >> would not even be surprised if I learned that modern telco routers >> transparently set up virtual circuits for tcp traffic. > > To fully explore this topic would take a book, which I don't have the energy > to write, and nobody would bother to read, but... packet switching won over Virtual Circuits in the now distant past but in small, local and un-congested networks without reliability constraints, any solution can look good. If “Performance is Hard”, so are Manageability & Reliability. Packet switching hasn’t scaled well to Global size, at least IMHO. Ethernet only became a viable LAN technology with advent of Twisted pair: point to point + Switches. The One Big Idea for common channels, collision detection/avoidance, became moot, even irrelevant with switch buffering & collision-less connectivity. 
Watching networking types deal with ethernet multi-path routing & link failover - quite problematic for 802.1 layer-2 nets, I can’t help but think “If you don’t design a feature in, why expect it to show up?”. All those voluminous CCITT / ITU-T standards were solving the problem “keep the network up & working”, they knew it wasn’t easy ‘at scale’. Harder if you have to do billing :) Something I’ve never understood is how to design large-scale VM farms, where DB’s, Filestores & ’services’ (eg Web) have to be migrated between physical hosts, made more difficult when services have to be split out of existing instances: who gets to keep the MAC & IP address? I can’t see either a layer-2 or IP addressing scheme that’s simple and works well / reliably for large VM fleets. Whereas, ‘well known’ Virtual Circuit endpoints can be permanently allocated & tied to an instance/service. Packet encapsulation & separation techniques, like MPLS, VPN’s and VLAN’s, are now commonplace in IP networks. Even the ubiquitous (multi-level) use of NAT for traffic isolation & preventing unwanted ingress shows, IMO, a design failure. Why would I advertise my internal network with IPv6 and allow every bad actor to probe my entire network? Hellish security admin for little benefit. I’ve never understood this reasoning. It seems to me, IP networks are trying to provide the functionality of Virtual Circuits - dedicated paths, transparent failover & even committed data rates. There are two very valuable functions that ‘dedicated paths’, implying topology-aware switches, can do that packet-switch can’t: - Network-based “Call Transfer” There’s no packet-switch mechanism I know of that allows a connection to an advertised ‘gateway’ or authenticator, that then allows invisible network rerouting of the connection to a ’service provider’ or host, without giving away information to the caller. This would be really handy for “follow me” connections, moving between user devices, even different access networks.
Apple have just announced something like this - moving running sessions between Mac, iPad, iPhone as needed. The inverse, “blocked caller id”, can’t be done in IP without a repeating host, hence VPN’s. [ In telephony, all A & B parties are identified for billing & forensics / interception. Addrs only hidden from other users. ] - “Multicast” / Conference calls Packet switched ‘conferencing’ is rarely done by the Network: despite multicast IP standards being 25 yrs or more old, they are rarely used. If major router vendors had good solutions, they’d be pushing them. The Pandemic brought forward the need for video calling everywhere, created a market for “Conferencing” products & “Streaming”. Isn’t Multicast of Free To Air TV & radio over Internet both obvious & desirable? It should be the modern equivalent of & replacement for ‘wireless’ broadcasting. Should be trivial if everyone has one or more mobile devices and a home Internet connection. The current Point-Point transmission model of Streaming Services - no scheduled broadcasts, On-demand only - seems to be the most resource intensive, hence costly, solution possible. Flash storage is under $0.20/GB, a small PVR with 1TB of “circular buffer” could cheaply convert “Scheduled” to “On Demand” at the customer premises. [ Samsung have announced a 1.5TB micro-SD card, targeted at surveillance recording. Purportedly, “100 days of video”. Cost? NFI ] One of the modern challenges for Enterprise Customers is “change network provider”. One major Aussie firm just took two years to move Service Provider and it was ’news’. 25yrs ago, and probably now as well, interconnecting / splitting corporate networks following merges / de-merges was a huge task. IP networks require re-numbering and basic IP services, like DNS, email, web, need re-provisioning / re-platforming. The Worst Case error - happens too often - is Network Admins cutting themselves off from the Remote Admin Network.
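[Editorial aside: the PVR “circular buffer” sizing above is easy to sanity-check. The 5 Mbit/s broadcast bitrate below is an illustrative assumption, not a figure from the post; the 1.5 TB / “100 days” pair is Samsung’s claim as quoted.]

```python
# Back-of-envelope check of the "circular buffer" PVR idea:
# how many days of video fit in a given amount of flash at a given bitrate?
# The 5 Mbit/s broadcast-quality rate is an assumed, illustrative figure.

def days_of_video(capacity_bytes: float, bitrate_bps: float) -> float:
    """Days of recording that fit in capacity_bytes at bitrate_bps."""
    seconds = capacity_bytes / (bitrate_bps / 8)   # bytes consumed per second
    return seconds / 86400

TB = 1e12
print(f"1 TB @ 5 Mbit/s: {days_of_video(1.0 * TB, 5e6):.1f} days")   # → 18.5 days

# Samsung's claimed "100 days" on 1.5 TB implies a surveillance-grade bitrate:
implied_bps = 1.5 * TB * 8 / (100 * 86400)
print(f"1.5 TB / 100 days implies {implied_bps / 1e6:.2f} Mbit/s")   # → 1.39 Mbit/s
```

So a 1 TB buffer holds a couple of weeks of broadcast-quality video, and the “100 days” surveillance claim is plausible only at a much lower bitrate.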
‘Recovery’ is slow & painful - all sites / all devices may have to be visited, even if you have a backup network link. Did Microsoft have a fault like this in the last 12 mths after it acquired a vendor? There was a big outage. Post merger, Enterprise phone switch & voice network are relatively simple & fast to change and rarely result in Admins losing Remote Admin access. It’s not that I don’t like cheap, easy Ethernet devices that mostly Just Work and at small scale are simple to set up and easy to manage. 30 years of hardware development has been a boon to us all, in almost all ways. Rob Pike wrote a great piece (in the 1980’s) about cheap hardware forcing systems software to take up the slack, an example, I thought, of Ashby’s Law of Requisite Variety. Have we arrived in this Packet Switch nightmare because of “Microsoft is Best” thinking? I don’t have a proper name for this blinkered view of the world, backed by arrogance & the assumption “What I don’t know can’t matter”. regards steve j -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From ralph at inputplus.co.uk Sun Jun 26 19:52:41 2022 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Sun, 26 Jun 2022 10:52:41 +0100 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> Message-ID: <20220626095241.3303921A1F@orac.inputplus.co.uk> Hi Anthony, > Would you happen to know where I can find copies of these three > papers? > > A. G. Fraser, "Datakit - A Modular Network for Synchronous and > Asynchronous Traffic", Proc. ICC 79, June 1979, Boston, Ma., > pp.20.1.1-20.1.3 > > G. L. Chesson, "Datakit Software Architecture", Proc. ICC 79, June > 1979, Boston Ma., pp.20.2.1-20.2.5 > > G. L. Chesson and A. G. Fraser, "Datakit Network Architecture," Proc.
> Compcon 80, February 1980, San Francisco, CA., pp.59-61 I had no luck looking for those. I did find other Datakit ones which may interest the list; all Bell Labs. - A Virtual Circuit Switch as the Basis for Distributed Systems. Luderer, Che, Marshall, Bell Labs, Murray Hill. ~1981. ‘...we have implemented... a new distributed operating system derived from Unix’. https://dl.acm.org/doi/pdf/10.1145/1013879.802670 - Methods for routing, performance management, and service problem relief in Datakit networks. Fendick, Harshavardhana, Jidarian, Bell Labs, Holmdel. 1991. https://libgen.rocks/get.php?md5=27c34fe1fce3ed20b79aaf65362cb5f9&key=VXFE8W3J24GAYURB&doi=10.1109/icc.1991.162268 - FIBERKIT: a Datakit compatible 500 megabit virtual circuit switch. Follett, Levy, Sobin, Tourgee, AT&T Bell Labs. 1988. https://libgen.rocks/get.php?md5=577344ef77634022868f52969dbda62b&key=COCJ9PQZSNIJ2U3B&doi=10.1109/glocom.1988.25918 -- Cheers, Ralph. From tuhs at tuhs.org Sun Jun 26 20:16:39 2022 From: tuhs at tuhs.org (Paul Ruizendaal via TUHS) Date: Sun, 26 Jun 2022 12:16:39 +0200 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> Message-ID: <9B6D5FEF-608F-4A70-84D8-0D18DB34A62F@planet.nl> > On 26 Jun 2022, at 04:19, Noel Chiappa wrote: > > I have barely begun to even scratch the surface, here. I feared as much. I will have to think about the modern network as a cloud that just works, without really understanding the in-depth how and why. Will read those RFC’s, though -- thank you for pointing them out.
Paul From pnr at planet.nl Sun Jun 26 21:04:39 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Sun, 26 Jun 2022 13:04:39 +0200 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> Message-ID: <14E73B91-1A99-4A14-9EE2-6331A4804C27@planet.nl> > On 26 Jun 2022, at 03:41, Anthony Martin wrote: > > Really nice outline. I felt some déjà vu when opening the document: I > remember reading your notes in April of 2020. Thank you. Yes, I have not updated these notes since then. > Would you happen to know where I can find copies of these three > papers? > > A. G. Fraser, "Datakit - A Modular Network for Synchronous and > Asynchronous Traffic", Proc. ICC 79, June 1979, Boston, Ma., > pp.20.1.1-20.1.3 > > G. L. Chesson, "Datakit Software Architecture", Proc. ICC 79, June > 1979, Boston Ma., pp.20.2.1-20.2.5 > > G. L. Chesson and A. G. Fraser, "Datakit Network Architecture," Proc. > Compcon 80, February 1980, San Francisco, CA., pp.59-61 Two of these three were in Sandy Fraser’s archives: https://www.jslite.net/notes/dk1.pdf https://www.jslite.net/notes/dk2.pdf The middle one is, I fear, lost - unless a paper copy of the proceedings is still lurking in some university library. I think it will be mostly about the “CMC” switch control software, which is somewhat described in the Luderer paper as well. As far as I can tell, there is no surviving Unix code that was designed for the CMC version of Datakit (unless someone is sitting on a V7 tape with the Datakit drivers still included). > To connect to one of the Plan 9 cpu servers at Bell Labs, you would > dial "nj/astro/helix!9fs”. Out of interest, would you know if the switch-host interface for Datakit in the days of Plan9 worked more or less the same as in 8th Edition? Or had it much evolved by then?
From cowan at ccil.org Sun Jun 26 23:07:49 2022 From: cowan at ccil.org (John Cowan) Date: Sun, 26 Jun 2022 09:07:49 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> Message-ID: On Sat, Jun 25, 2022 at 10:20 PM Noel Chiappa wrote: > it had things called 'flows' which were half way between pure > datagrams (i.e. no setup - you just stick the right destination address in > the > header and send it off) and VCs (read the RFCs if you want to kow why), > In that connection I have always admired Padlipsky's RFC 962, which exploits the existing TCP architecture to do just this. So simple, so easy, so Unixy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Sun Jun 26 23:35:52 2022 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 26 Jun 2022 06:35:52 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> Message-ID: <20220626133552.GJ28639@mcvoy.com> On Sun, Jun 26, 2022 at 09:07:49AM -0400, John Cowan wrote: > On Sat, Jun 25, 2022 at 10:20 PM Noel Chiappa > wrote: > > > > it had things called 'flows' which were half way between pure > > datagrams (i.e. no setup - you just stick the right destination address in > > the > > header and send it off) and VCs (read the RFCs if you want to kow why), > > > > In that connection I have always admired Padlipsky's RFC 962, which > exploits the existing TCP architecture to do just this. So simple, so > easy, so Unixy. I knew Mike, interesting dude. His "The Elements of Networking Style" is a very fun read but also, for me, just getting to understand networking, it snapped a bunch of stuff into focus. I think you have to read it at just the right spot in your career and I did. Great little book and full of jabs like "If you know what you are doing, 3 layers are enough. If you don't, 7 aren't." 
From cowan at ccil.org Sun Jun 26 23:58:35 2022 From: cowan at ccil.org (John Cowan) Date: Sun, 26 Jun 2022 09:58:35 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220626133552.GJ28639@mcvoy.com> References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> <20220626133552.GJ28639@mcvoy.com> Message-ID: I'll check it out eventually. $10 on Ebay (the cheapest per bookfinder.com) is a little steep nowadays. Maybe I'll spring for a <$17 new copy. No Kindle, alas. On Sun, Jun 26, 2022 at 9:35 AM Larry McVoy wrote: > On Sun, Jun 26, 2022 at 09:07:49AM -0400, John Cowan wrote: > > On Sat, Jun 25, 2022 at 10:20 PM Noel Chiappa > > wrote: > > > > > > > it had things called 'flows' which were half way between pure > > > datagrams (i.e. no setup - you just stick the right destination > address in > > > the > > > header and send it off) and VCs (read the RFCs if you want to kow why), > > > > > > > In that connection I have always admired Padlipsky's RFC 962, which > > exploits the existing TCP architecture to do just this. So simple, so > > easy, so Unixy. > > I knew Mike, interesting dude. His "The Elements of Networking Style" is > a very fun read but also, for me, just getting to understand networking, > it snapped a bunch of stuff into focus. I think you have to read it > at just the right spot in your career and I did. Great little book and > full of jabs like "If you know what you are doing, 3 layers are enough. > If you don't, 7 aren't." > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fair at netbsd.org Mon Jun 27 06:35:41 2022 From: fair at netbsd.org (Erik Fair) Date: Sun, 26 Jun 2022 13:35:41 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: <1C6AFC69-F616-421A-B7BA-376ACDC295BA@canb.auug.org.au> References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> <1C6AFC69-F616-421A-B7BA-376ACDC295BA@canb.auug.org.au> Message-ID: > On Jun 26, 2022, at 02:46, steve jenkin wrote: > > One of the modern challenges for Enterprise Customers is “change network provider”. > One major Aussie firm just took two years to move Service Provider and it was ’news’. > > 25yrs ago, and probably now as well, interconnecting / splitting corporate networks following merges / de-merges was a huge task. > IP networks require re-numbering and basic IP services, like DNS, email, web, need re-provisioning / re-platforming. I take issue with this, and have experience to back up my view: it is not hard to change Internet Service Providers (ISP) for large corporations - I did it several times during my tenure as the “Internet guy” for Apple Computer, Inc., an ~$8bn revenue multinational corporation. You just have to plan for it properly, handle transitions gracefully (ideally, with overlap), and keep control of (do not outsource) key assets like domain names, public IP network address assignments, and services like e-mail (SMTP). Apple's primary “face” to the world in July 1988 when I arrived was a DEC VAX-11/780 running 4.3 BSD Unix. I renumbered it once, when we changed from CSNET’s X25NET (9.6Kb/s IP-over-X.25 via TELENET/SPRINTlink) to a 56Kb/s (DS0) leased line - we started using an assigned class B network: 130.43/16 - the VAX became 130.43.2.2. It retained its name as “apple.com” until it was decommissioned in the late 1990s. Apple was on CSNET primarily (where the VAX was connected), and had a separate BARRNET T1 that belonged to the A/UX group (A/UX was Apple’s version of Unix).
I connected our two external “perimeter” networks , set up fail-over IP routing (initially with RIP (ugh), later deployed BGP), upgraded CSNET in California, added a second site in Cambridge, MA to CSNET, later moved Cambridge to NEARNET, replaced CSNET in California with CERFNET, helped our European offices in Zeist, NL connect to SURFnet, and our customer service division in Austin, TX to SPRINTNET (they had to forcibly (disruptively) renumber, but they screwed up by not listening to me during their network planning). We did have to clean up an internal IP address mess when I got there: lots of “picked out of the air” net numbers in use: 90/8, 92/8, 95/8, etc.; those were all renumbered into properly assigned net 17/8, which Apple still uses today. Before the WWW, “ftp.apple.com” offered up MacOS software distribution via anonymous FTP from a Mac IIcx running A/UX under my desk, and to prevent our connectivity from being overwhelmed when MacOS releases were published, I wrote an ftp-listener to restrict the number of FTP connections per peer classful network number for that server. I later installed that code at the Smithsonian Institution on a Unix machine they set up for public anonymous FTP of their digitally-scanned historical photography archive, because they had the same problem: limited connectivity, popular content. 
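[Editorial aside: Erik's per-network FTP throttle can be sketched roughly as follows. This is a reconstruction for illustration only — the class name, limit value, and helper functions are invented, not Erik's original ftp-listener code — but the key idea is his: count concurrent connections per *classful* network (the pre-CIDR /8, /16, /24 rules), not per host, so one popular campus can't monopolize a thin link.]

```python
# Sketch of a per-source-network connection limiter in the spirit of the
# ftp-listener described above: concurrent connections are counted per
# classful network. Names and the limit value are invented for illustration.
import ipaddress
from collections import defaultdict

def classful_network(addr: str) -> str:
    """Return the classful (pre-CIDR) network an IPv4 address belongs to."""
    first = int(addr.split(".")[0])
    if first < 128:        # Class A: /8
        prefix = 8
    elif first < 192:      # Class B: /16
        prefix = 16
    else:                  # Class C (and above, lumped in for this sketch): /24
        prefix = 24
    return str(ipaddress.ip_interface(f"{addr}/{prefix}").network)

class PerNetLimiter:
    def __init__(self, max_per_net: int = 4):
        self.max_per_net = max_per_net
        self.active = defaultdict(int)     # classful net -> open connections

    def try_accept(self, peer_addr: str) -> bool:
        net = classful_network(peer_addr)
        if self.active[net] >= self.max_per_net:
            return False                   # this network is at its cap
        self.active[net] += 1
        return True

    def release(self, peer_addr: str) -> None:
        self.active[classful_network(peer_addr)] -= 1

limiter = PerNetLimiter(max_per_net=2)
print(limiter.try_accept("130.43.2.2"))    # True
print(limiter.try_accept("130.43.9.9"))    # True  (same class B)
print(limiter.try_accept("130.43.1.1"))    # False (class B 130.43/16 is full)
```

A real listener would wrap `try_accept`/`release` around the accept loop and connection teardown; the accounting above is the whole trick.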
As for mergers, acquisitions, and spinoffs, I was involved in networking for Coral Software (acquired; purveyors of Macintosh Common Lisp) which became Apple Cambridge; Taligent which was an Apple/IBM joint venture (an Apple OS & software group spin-out; they were set up with a separate public class B network number and domain name from the get-go to provide for maximum future flexibility, even though they were initially in a building on the Apple Campus in Cupertino, CA and thus on the dark fiber plant, and Apple (me) was their ISP) and ultimately IBM bought out Apple's stake; the Apple Engineering secured connection to the “Somerset Design Center” in Austin, TX (a joint venture between Apple, IBM, and Motorola (the AIM alliance) from whence the PowerPC processor came), which was tricky. I’ll grant that when I was doing this work, Internet connectivity downtime didn’t mean immediate revenue losses (e.g., from not being able to accept orders in the Apple Online store, which did not (yet) exist), but e-mail was critical to Apple’s world-wide operations, and making e-mail robust & reliable was the first job I was hired to do: fix sendmail(8), take control of DNS (CSNET controlled the apple.com zone file when I arrived), and naturally, that extended to making Internet connectivity & routing robust as well. The main problem that “modern” mergers & acquisitions face in coordinating networking is a direct result of something else I fought against in the IETF: private IP address space (RFC 1918), and Network Address Translation (NAT) - see RFC 1627. Unfortunately, I and my colleagues lost that debate, which is why one now sees double/triple NAT setups to try and make overlapping private IPv4 addressed networks talk to each other. You’ll notice that I eschewed use of private address space while at Apple, and having unique public IP address space made most things simpler - funny how following the architecture works better than fighting it.
Unix is tied into all of this because it has been the platform where (most often) the first implementation of many of these protocols or hacks is written. It takes a flexible operating system to make such a wide range of applications possible. However, properly setting up the rest of the communications infrastructure (in which Unix must communicate) is important too. Erik Fair From sjenkin at canb.auug.org.au Mon Jun 27 07:53:27 2022 From: sjenkin at canb.auug.org.au (Steve Jenkin) Date: Mon, 27 Jun 2022 07:53:27 +1000 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <20220626021956.0140918C0A2@mercury.lcs.mit.edu> <1C6AFC69-F616-421A-B7BA-376ACDC295BA@canb.auug.org.au> Message-ID: Erik, Thanks for the reply & the account of your work. You seemed to do a lot less work than I saw networking teams forced into. As you say, “work with it”. I worked at the ANU for a short time. They used a Class-B, with address ranges delegated to (many) local admins, but Central IT demanded every device connect to their switch, which allowed them to monitor ports. I think they’d have a hard time renumbering; my section didn’t even have DHCP. Here’s the reference I spared the list initially. Very thin on details. The only more detailed info is a video. Australia Post's telco transformation named top IT project Billed as the largest project of its type, the project wrapped up in November last year after more than two years of work to address critical network performance challenges and high operating costs. The telecommunication transformation saw the rollout of a new software-defined wide area network (SD-WAN) across 4000 sites. regards steve j > On 27 Jun 2022, at 06:35, Erik Fair wrote: > >> >> On Jun 26, 2022, at 02:46, steve jenkin wrote: >> >> One of the modern challenges for Enterprise Customers is “change network provider”. >> One major Aussie firm just took two years to move Service Provider and it was ’news’.
>> >> 25yrs ago, and probably now as well, interconnecting / splitting corporate networks following merges / de-merges was a huge task. >> IP networks require re-numbering and basic IP services, like DNS, email, web, need re-provisioning / re-platforming. > > > I take issue with this, and have experience to backup my view: it is not hard to change Internet Service Providers (ISP) for large corporations - I did it several times during my tenure as the “Internet guy” for Apple Computer, Inc., an ~$8bn revenue multinational corporation. You just have to plan for it properly, handle transitions gracefully (ideally, with overlap), and keep control of (do not outsource) key assets like domain names, public IP network address assignments, and services like e-mail (SMTP). -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From jnc at mercury.lcs.mit.edu Mon Jun 27 10:43:49 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 26 Jun 2022 20:43:49 -0400 (EDT) Subject: [TUHS] Research Datakit notes Message-ID: <20220627004349.84ABA18C07B@mercury.lcs.mit.edu> Just as the topic of TUHS isn't 'how _I_ could/would build a _better_ OS', but 'history of the OS that was _actually built_' (something that many posters here seem to lose track of, to my great irritation), so too the topic isn't 'how to build a better network' - or actually, anything network-centric. I'll make a few comments on a couple of things, though. > From: steve jenkin > packet switching won over Virtual Circuits in the now distant past but > in small, local and un-congested networks without reliability > constraints, any solution can look good. ... Packet switching > hasn't scaled well to Global size, at least IMHO.
The internetworking architecture, circa 1978, has not scaled as well as would have been optimal, for a number of reasons, among them: - pure scaling effects (e.g. algorithms won't scale up; subsystems which handle several different needs will often need to be separated out at a larger scale; etc) - inherent lack of hindsight (unknown unknowns, to use Rumsfeld's phrase; some things you only learn in hindsight) - insufficiently detailed knowledge of complete requirements for a global-scale network (including O+M, eventual business model, etc) - limited personnel resources at the time (some things we _knew_ were going to be a problem we had to ignore because we didn't have people to throw at the problem, then and there) - rapid technological innovation (and nobody's crystal ball is 100% perfect) It has been possible to fix some aspects of the ca. 1978 system - e.g. the addition of DNS, which I think has worked _reasonably_ well - but in other areas, changes weren't really adequate, often because they were constrained by things like upward compatibility requirements (e.g. BGP, which, among numerous other issues, had to live with existing IP addressing). Having said all that, I think your assertion that virtual circuits would have worked better in a global-scale network is questionable. The whole point of networks which use unreliable datagrams as a fundamental building block is that by moving a lot of functionality into the edge nodes, it makes the switches a lot simpler. Contemporary core routers may be complex - but they would be much worse if the network used virtual circuits. Something I suspect you may be unaware of is that most of the people who devised the unreliable datagram approach of the internetworking architecture _had experience with an actual moderately-sized, operational virtual circuit network_ - the ARPANET. (Yes, it was basically a VC network. 
Look at things like RFNMs, links {the specific ARPANET mechanism referred to by this term, not the general concept}, etc.) So they _knew_ what a VC network would involve. So, think about the 'core routers' in a network which used VC's. I guess a typical core router these days uses a couple of OC768 links. Assume an average packet size of 100 bytes (probably roughly accurate, with the bimodal distribution between data and acks). With 4 OC768's, that's 4*38.5G/800 = ~190M packets/second. I'm not sure of the average TCP connection length in packets these days, but assume it's 100 packets or so (that's a 100KB Web object). That's still roughly _2 million circuit setups per second_. If the answer is 'oh, we'll use aggregation so core routers don't see individual connections - or their setup/tear-down' - well, the same can be done with a datagram system; that's what MPLS does. Work through the details - VCs were not preferred, for good reasons. > Ethernet only became a viable LAN technology with advent of Twisted > pair: point to point + Switches. It's really irritating that a lot of things labelled 'Ethernet' these days _aren't_ _real_ Ethernet (i.e. a common broadcast bus allocated via CSMA-CD). They use the same _packet format_ as Ethernet (especially the 48-bit globally-unique address, which can usefully be blown into things at manufacture time), but it's not Ethernet. In some cases, they also retain the host interface<->network physical interface - but the thing on the other side of the interface is totally different (such as the hub-based systems common now - as you indicate, it's a bunch of small datagram packet switches plugged together with point-point links). Interfaces are forever; like the screw in light-bulb. These days, it's likely an LED bulb on one side, powered by a reactor on the other - two technologies which were unforeseen (and unforeseeable) when the interface was defined, well over 100 years ago.
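[Editorial aside: Noel's back-of-envelope core-router arithmetic is easy to recompute. The assumptions are his — four OC-768 links at ~38.5 Gbit/s of payload each, 100-byte average packets, 100-packet average TCP connections:]

```python
# Recomputing the core-router estimate above: four OC-768 links,
# 100-byte average packets, ~100 packets per TCP connection.

LINKS = 4
OC768_BPS = 38.5e9        # payload rate of one OC-768, bits/second
PKT_BITS = 100 * 8        # 100-byte average packet
PKTS_PER_CONN = 100       # ~100 KB web object

pps = LINKS * OC768_BPS / PKT_BITS          # aggregate packets/second
setups_per_sec = pps / PKTS_PER_CONN        # implied VC setups/second

print(f"{pps / 1e6:.1f}M packets/second")          # → 192.5M packets/second
print(f"{setups_per_sec / 1e6:.1f}M setups/second")  # → 1.9M setups/second
```

Either way the point stands: a pure-VC core would need on the order of millions of circuit setups per second, which is the argument for aggregation (and for datagrams plus MPLS-style labels).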
Noel From kevin.bowling at kev009.com Mon Jun 27 10:57:39 2022 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Sun, 26 Jun 2022 17:57:39 -0700 Subject: [TUHS] forgotten versions In-Reply-To: <249B533B-E8D4-462C-8A6D-16A198BA055D@planet.nl> References: <38ED67E4-4D75-47A8-BA44-4A08DF487EC2@planet.nl> <249B533B-E8D4-462C-8A6D-16A198BA055D@planet.nl> Message-ID: On Sat, Jun 25, 2022 at 1:46 PM Paul Ruizendaal wrote: > > > On 25 Jun 2022, at 21:16, Anthony Martin wrote: > > > > The following papers are a good overview of Datakit and its > > predecessors. > > > > A. Fraser, "Towards a Universal Data Transport System," in IEEE > > Journal on Selected Areas in Communications, vol. 1, no. 5, pp. > > 803-816, November 1983, doi: 10.1109/JSAC.1983.1145998. > > > > A. G. Fraser, "Early experiments with asynchronous time division > > networks," in IEEE Network, vol. 7, no. 1, pp. 12-26, Jan. 1993, > > doi:10.1109/65.193084. > > > > The latter mentions Plan 9 but only in passing. > > Yes, those are great papers - unfortunately behind a paywall. > > There is a great 1994 video on Youtube by Sandy Fraser himself that more > or less follows the 1993 paper: > > https://www.youtube.com/watch?v=ojRtJ1U6Qzw > Superb. The story of an invention told through metaphors and mistakes. > As Doug mentioned on this list, Sandy Fraser passed away earlier this > month. > I was unfamiliar with Sandy prior to this thread. > In the past years I’ve worked on understanding (early) Datakit and Sandy > Fraser and his wife were most kind with assistance looking for papers. I’ve > also benefitted from the input of Bill Marshall and of course Doug McIlroy. > I’ll share my summary of Research Datakit in a separate post. > > Paul > > > Paul Ruizendaal once said: > >> Probably you will see echoes of this in early Plan9 network code, but I > have not studied that. > > > > As someone how has studied Plan 9 extensively, though with no insider > > knowledge, it's definitely noticeable. 
> > > > "In the aftermath, perhaps the most valuable effect of dealing with > > Datakit was to encourage the generalized and flexible approach to > > networking begun in 8th edition Unix that is carried forward into Plan > > 9." - dmr (2004) > > > > Cheers, > > Anthony > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fair at netbsd.org Mon Jun 27 13:00:53 2022 From: fair at netbsd.org (Erik Fair) Date: Sun, 26 Jun 2022 20:00:53 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220627004349.84ABA18C07B@mercury.lcs.mit.edu> References: <20220627004349.84ABA18C07B@mercury.lcs.mit.edu> Message-ID: <96EA8EF0-AC28-4710-A1C3-8D979B0CD4B0@netbsd.org> > On Jun 26, 2022, at 17:43, Noel Chiappa wrote: > > Interfaces are forever; like the screw in light-bulb. These days, it's likely > an LED bulb on one side, powered by a reactor on the other - two technologies > which were unforseen (and unforseeable) when the interface was defined, well > over 100 years ago. To be specific: the “Edison screw” (or “Edison medium base” or now E26) patented by Thomas Edison in 1881, https://en.wikipedia.org/wiki/Edison_screw It’s been said that “hardware comes & goes, but software is forever” but I like to add “software comes & goes, but protocols are forever.” We should take care in how we design/define protocols and interfaces. Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjenkin at canb.auug.org.au Mon Jun 27 13:58:04 2022 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Mon, 27 Jun 2022 13:58:04 +1000 Subject: [TUHS] Clem's Law. Message-ID: <12D158D7-41B4-41C9-BB68-7DC6C1BBB8FF@canb.auug.org.au> I thought this comment was very good. I went looking for “Clem’s Law” (presume Clem Cole) and struck out. Any hints anyone can suggest or history on the comment? 
steve j ========== Larry McVoy wrote Fri Sep 17 10:44:25 AEST 2021 Plan 9 is very cool but I am channeling my inner Clem, Plan 9 didn't meet Clem's law. It was never compelling enough to make the masses love it. Linux was good enough. ========== -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From joshnatis0 at gmail.com Mon Jun 27 14:37:33 2022 From: joshnatis0 at gmail.com (josh) Date: Mon, 27 Jun 2022 00:37:33 -0400 Subject: [TUHS] Clem's Law. In-Reply-To: <12D158D7-41B4-41C9-BB68-7DC6C1BBB8FF@canb.auug.org.au> References: <12D158D7-41B4-41C9-BB68-7DC6C1BBB8FF@canb.auug.org.au> Message-ID: > Clem Cole wrote: > My own take on this is what I call "Cole's Law" Simple economics > always beats sophisticated architecture. Cole's Law is a bit of a better name than Clem's Law :P. Clem has invoked his law on this mailing list in various contexts, check out the archives. Josh On Sun, Jun 26, 2022 at 11:58 PM steve jenkin wrote: > I thought this comment was very good. > > I went looking for “Clem’s Law” (presume Clem Cole) and struck out. > > Any hints anyone can suggest or history on the comment? > > steve j > > ========== > > Larry McVoy wrote Fri Sep 17 10:44:25 AEST 2021 > > > Plan 9 is very cool but I am channeling my inner Clem, > Plan 9 didn't meet Clem's law. > It was never compelling enough to make the masses love it. > Linux was good enough. > > ========== > -- > Steve Jenkin, IT Systems and Design > 0412 786 915 (+61 412 786 915) > PO Box 38, Kippax ACT 2615, AUSTRALIA > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Mon Jun 27 14:51:09 2022 From: imp at bsdimp.com (Warner Losh) Date: Sun, 26 Jun 2022 22:51:09 -0600 Subject: [TUHS] Clem's Law. 
In-Reply-To: References: <12D158D7-41B4-41C9-BB68-7DC6C1BBB8FF@canb.auug.org.au> Message-ID: On Sun, Jun 26, 2022, 10:39 PM josh wrote: > > Clem Cole wrote: > > My own take on this is what I call "Cole's Law" Simple economics > > always beats sophisticated architecture. > > Cole's Law is a bit of a better name than Clem's Law :P. Clem has > invoked his law on this mailing list in various contexts, check out > the archives. > I thought Cole's Law was thinly sliced cabbage. . Warner Josh > > On Sun, Jun 26, 2022 at 11:58 PM steve jenkin > wrote: > >> I thought this comment was very good. >> >> I went looking for “Clem’s Law” (presume Clem Cole) and struck out. >> >> Any hints anyone can suggest or history on the comment? >> >> steve j >> >> ========== >> >> Larry McVoy wrote Fri Sep 17 10:44:25 AEST 2021 >> >> >> Plan 9 is very cool but I am channeling my inner Clem, >> Plan 9 didn't meet Clem's law. >> It was never compelling enough to make the masses love it. >> Linux was good enough. >> >> ========== >> -- >> Steve Jenkin, IT Systems and Design >> 0412 786 915 (+61 412 786 915) >> PO Box 38, Kippax ACT 2615, AUSTRALIA >> >> mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjenkin at canb.auug.org.au Mon Jun 27 17:14:53 2022 From: sjenkin at canb.auug.org.au (Steve Jenkin) Date: Mon, 27 Jun 2022 17:14:53 +1000 Subject: [TUHS] Clem's Law. In-Reply-To: References: <12D158D7-41B4-41C9-BB68-7DC6C1BBB8FF@canb.auug.org.au> Message-ID: Thanks very much for the help. "Cole's law of economics vs. sophisticated technology” and "Cole's law that *Simple Economics always beats Sophisticated Architecture*” ——— Earliest ref I could find, Jan 2020 [TUHS] [TUHS -> moving to COFF] # and the Preprocessor with a thread in Dec 2020 [TUHS] Cole's Slaw —————— “Paper" by Rob Pike is actually a talk. Cheap PC hardware got better. 
Systems Software Research is Irrelevant Rob Pike Feb 21, 2000 —————— > On 27 Jun 2022, at 14:37, josh wrote: > > Cole's Law is a bit of a better name than Clem's Law :P. Clem has > invoked his law on this mailing list in various contexts, check out > the archives. > > Josh -- From jnc at mercury.lcs.mit.edu Tue Jun 28 07:40:23 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 27 Jun 2022 17:40:23 -0400 (EDT) Subject: [TUHS] Research Datakit notes Message-ID: <20220627214023.1511218C094@mercury.lcs.mit.edu> > From: Paul Ruizendaal > Will read those RFC's, though -- thank you for pointing them out. Oh, I wouldn't bother - unless you are really into routing (i.e. path selection). RFC-1992 in particular; it's got my name on it, but it was mostly written by Martha and Isidro, and I'm not entirely happy with it. E.g. CSC mode and CSS mode (roughly, strict source route and loose source route); I wasn't really sold on them, but I was too tired to argue about it. Nimrod was complicated enough without adding extra bells and whistles - and indeed, LSR and SSR are basically unused to this day in the Internet (at least, at the internet layer; MPLS does provide the ability to specify paths, which I gather is used to some degree). I guess it's an OK overview of the architecture, though. RFC-1753 is not the best overview, but it has interesting bits. E.g. 2.2 Packet Format Fields, Option 2: "The packet contains a stack of flow-ids, with the current one on the top." If this reminds you of MPLS, it should! (One can think of MPLS as Nimrod's packet-carrying subsystem, in some ways.) I guess I should mention that Nimrod covers more stuff - a lot more - than just path selection. That's because I felt that the architecture embodied in IPv4 was missing lots of things which one would need to do the internet layer 'right' in a global-scale Internet (e.g. 
variable length 'addresses' - for which we were forced to invent the term 'locator' because many nitwits in the IETF couldn't wrap their minds around 'addresses' which weren't in every packet header). And separation of location and identity; and the introduction of traffic aggregates as first-class objects at the internet layer. Etc, etc, etc. Nimrod's main focus was really on i) providing a path-selection system which allowed things like letting users have more input to selecting the path their traffic took (just as when one gets into a car, one gets to pick the path one's going to use), and ii) controlling the overhead of the routing. Of course, on the latter point, in the real world, people just threw resources (memory, computing power, bandwidth) at the problem. I'm kind of blown away that there are almost 1 million routes in the DFZ these days. Boiling frogs... Noel From ggm at algebras.org Tue Jun 28 08:40:57 2022 From: ggm at algebras.org (George Michaelson) Date: Tue, 28 Jun 2022 08:40:57 +1000 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220627214023.1511218C094@mercury.lcs.mit.edu> References: <20220627214023.1511218C094@mercury.lcs.mit.edu> Message-ID: I did an analysis of the DFZ with Emile Aben at RIPE. There may be a million now, but at least half of these are TE and functionally irrelevant to 90%+ of the rest of the BGP speakers, being aimed at immediate peers only. If we renumbered, the count of real announcements becomes very much smaller, close to the count of ASN, modulo some necessary unaggregatable outcomes. Geoff has done work on this too, the ratio between noisy speakers and the stable speakers appears to be constant modulo natural growth. (Geoff says hello btw) -G On Tue, Jun 28, 2022 at 7:41 AM Noel Chiappa wrote: > > > From: Paul Ruizendaal > > > Will read those RFC's, though -- thank you for pointing them out. > > Oh, I wouldn't bother - unless you are really into routing (i.e. path > selection).
> > RFC-1992 in particular; it's got my name on it, but it was mostly written by > Martha and Isidro, and I'm not entirely happy with it. E.g. CSC mode and CSS > mode (roughly, strict source route and loose source route); I wasn't really > sold on them, but I was too tired to argue about it. Nimrod was complicated > enough without adding extra bells and whistles - and indeed, LSR and SSR are > basically unused to this day in the Internet (at least, at the internet > layer; MPLS does provide the ability to specify paths, which I gather is used > to some degree). I guess it's an OK overview of the architecture, though. > > RFC-1753 is not the best overview, but it has interesting bits. E.g. 2.2 > Packet Format Fields, Option 2: "The packet contains a stack of flow-ids, > with the current one on the top." If this reminds you of MPLS, it should! > (One can think of MPLS as Nimrod's packet-carrying subsystem, in some ways.) > > I guess I should mention that Nimrod covers more stuff - a lot more - than > just path selection. That's because I felt that the architecture embodied in > IPv4 was missing lots of things which one would need to do the internet layer > 'right' in a global-scale Internet (e.g. variable length 'addresses' - for > which we were forced to invent the term 'locator' because many nitwits in the > IETF couldn't wrap their minds around 'addresses' which weren't in every > packet header). And separation of location and identity; and the introduction > of traffic aggregates as first-class objects at the internet layer. Etc, etc, > etc. > > Nimrod's main focus was really on i) providing a path-selection system which > allowed things like letting users have more input to selecting the path their > traffic took (just as when one gets into a car, one gets to pick the path > one's going to use), and ii) controlling the overhead of the routing. 
> > Of course, on the latter point, in the real world, people just threw > resources (memory, computing power, bandwidth) at the problem. I'm kind of > blown away< that there are almost 1 million routes in the DFZ these days. > Boiling frogs... > > Noel From dfawcus+lists-tuhs at employees.org Tue Jun 28 20:38:43 2022 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Tue, 28 Jun 2022 11:38:43 +0100 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: On Sun, Jun 26, 2022 at 09:57:17AM +1000, Rob Pike wrote: > One of the things we liked about Datakit was that the computer didn't have > to establish the connection before it could reject the call, unlike TCP/IP > where all validation happens after the connection is made. Nor does TCP, one can send a RST to a SYN, and reject the call before it is established. That would then look to the caller just like a non listening endpoint, unless one added data with the RST. So this is really just a consequence of the sockets API, and the current implementations. I've a vague recall of folks suggesting ways to expose that facility via the sockets layer, possibly using setsockopt(), but don't know if anyone ever did it. As I recall that TCP capability was actually exposed via the TLI/XTI API, and (for some STREAMS based TCP stacks) it did function. Although I may be thinking of embedded STREAMS TCP stacks, not unix based stacks. Or by 'connection' are you referring to an end-to-end packet delivery, and that Datakit allowed a closer switch to reject a call before the packet got to the far end? DF From robpike at gmail.com Tue Jun 28 22:36:31 2022 From: robpike at gmail.com (Rob Pike) Date: Tue, 28 Jun 2022 22:36:31 +1000 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: I am not a networking expert. 
I said that already. The issue could well be a property more of sockets than TCP/IP itself, but having the switch do some of the call validation and even maybe authentication (I'm not sure...) sounds like it takes load off the host. -rob On Tue, Jun 28, 2022 at 8:39 PM Derek Fawcus < dfawcus+lists-tuhs at employees.org> wrote: > On Sun, Jun 26, 2022 at 09:57:17AM +1000, Rob Pike wrote: > > One of the things we liked about Datakit was that the computer didn't > have > > to establish the connection before it could reject the call, unlike > TCP/IP > > where all validation happens after the connection is made. > > Nor does TCP, one can send a RST to a SYN, and reject the call before it is > established. That would then look to the caller just like a non listening > endpoint, unless one added data with the RST. > > So this is really just a consequence of the sockets API, and the current > implementations. > I've a vague recall of folks suggesting ways to expose that facility via > the sockets > layer, possibly using setsockopt(), but don't know if anyone ever did it. > > As I recall that TCP capability was actually exposed via the TLI/XTI API, > and (for some STREAMS based TCP stacks) it did function. Although I may be > thinking of embedded STREAMS TCP stacks, not unix based stacks. > > Or by 'connection' are you referring to an end-to-end packet delivery, > and that Datakit allowed a closer switch to reject a call before the packet > got to the far end? > > DF > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robpike at gmail.com Tue Jun 28 22:45:11 2022 From: robpike at gmail.com (Rob Pike) Date: Tue, 28 Jun 2022 22:45:11 +1000 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: One of the reasons I'm not a networking expert may be relevant here. With networks, I never found an abstraction to hang my hat on. 
Unlike with file systems and files, or even Unix character devices, which provide a level of remove from the underlying blocks and sectors and so on, the Unix networking interface always seemed too low-level and fiddly, analogous to making users write files by managing the blocks and sectors themselves. It could be all sockets' fault, but when I hear networking people talk about the protocols and stacks and routing and load shedding and ....my ears droop. I know it's amazing engineering and all that, but why aren't we allowed to program the I/O without all that fuss? What makes networks so _different_? A telling detail is that the original sockets interface had send and recv, not read and write. From day 1 in Unix land at least, networking was special, and it remains so, but I fail to see why it needs to be. It just seems there has to be a better way. Sockets are just so unpleasant, and the endless nonsense around network configuration doubly so. Rhetorical questions. I'm not asking or wanting an answer. I'm happy to remain a greenhorn, oblivious to the wonder. To adapt a reference some may recognize, I just want to read 5 terabytes. -rob On Tue, Jun 28, 2022 at 10:36 PM Rob Pike wrote: > I am not a networking expert. I said that already. The issue could well be > a property more of sockets than TCP/IP itself, but having the switch do > some of the call validation and even maybe authentication (I'm not sure...) > sounds like it takes load off the host. > > -rob > > > On Tue, Jun 28, 2022 at 8:39 PM Derek Fawcus < > dfawcus+lists-tuhs at employees.org> wrote: > >> On Sun, Jun 26, 2022 at 09:57:17AM +1000, Rob Pike wrote: >> > One of the things we liked about Datakit was that the computer didn't >> have >> > to establish the connection before it could reject the call, unlike >> TCP/IP >> > where all validation happens after the connection is made. >> >> Nor does TCP, one can send a RST to a SYN, and reject the call before it >> is >> established. 
That would then look to the caller just like a non listening >> endpoint, unless one added data with the RST. >> >> So this is really just a consequence of the sockets API, and the current >> implementations. >> I've a vague recall of folks suggesting ways to expose that facility via >> the sockets >> layer, possibly using setsockopt(), but don't know if anyone ever did it. >> >> As I recall that TCP capability was actually exposed via the TLI/XTI API, >> and (for some STREAMS based TCP stacks) it did function. Although I may be >> thinking of embedded STREAMS TCP stacks, not unix based stacks. >> >> Or by 'connection' are you referring to an end-to-end packet delivery, >> and that Datakit allowed a closer switch to reject a call before the >> packet >> got to the far end? >> >> DF >> > From rdm at cfcl.com Tue Jun 28 22:47:11 2022 From: rdm at cfcl.com (Rich Morin) Date: Tue, 28 Jun 2022 05:47:11 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> > On Jun 28, 2022, at 05:36, Rob Pike wrote: > > I am not a networking expert. I said that already. The issue could well be a property more of sockets than TCP/IP itself, but having the switch do some of the call validation and even maybe authentication (I'm not sure...) sounds like it takes load off the host. Some years ago, we set up a front end email server to reject incoming message attempts that didn't match our list of valid users. This resulted in a better than 90% reduction.
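A minimal sketch of the sort of front-end filtering Rich describes — rejecting a recipient at RCPT TO time, before the message body is ever accepted. The user list and function name here are hypothetical, not from any real setup; only the 250/550 reply codes follow SMTP convention.

```python
# Hypothetical sketch of front-end recipient filtering: reject a
# message at RCPT TO time, before the body is ever transferred.
VALID_USERS = {"alice", "bob", "rdm"}

def rcpt_check(rcpt_addr: str) -> tuple[int, str]:
    """Return an SMTP (code, text) reply for a RCPT TO address."""
    local_part = rcpt_addr.split("@", 1)[0].lower()
    if local_part in VALID_USERS:
        return (250, "OK")
    # 550 is the standard "mailbox unavailable" permanent failure.
    return (550, "No such user here")
```

Rejecting at this stage is what yields the big win: the server never spends bandwidth or disk on bodies addressed to nonexistent users.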
-r From marc.donner at gmail.com Tue Jun 28 23:13:34 2022 From: marc.donner at gmail.com (Marc Donner) Date: Tue, 28 Jun 2022 09:13:34 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> Message-ID: In the mid-1980s I returned to IBM Research after finishing up at CMU. I smuggled a bunch of Sun machines in and strung Ethernet between my office and my lab so that the desktop could talk to the server. Then I went around IBM giving talks about TCP/IP and why IBM should commit to it. At the time IBM Research was the center of development of IBM's SNA stuff, so there was some (!) tension. (Particularly because Paul Greene, one of the key leaders of the SNA work, was very close to my undergraduate mentor, so I had socialized with him.) They proposed running TCP/IP encapsulated in SNA, but I told them that the best they could expect was to encapsulate SNA in TCP/IP. That turned out to be what happened. My perception of the debate at the time was that it pitted proprietary networking (SNA, DECNet, ...) against open networking (TCP/IP). The hardware vendors wanted proprietary networking to lock customers into their equipment, but that dog would not hunt. Meanwhile, our community had recently discovered how horrible proprietary tech was for our careers ... the mid-1980s recession led to serious layoffs in the system programmer community and the newly unemployed geeks discovered that the skills so assiduously honed were not portable. Enter FSK and the open source movement. It was pretty clear that except for the clever encapsulation stuff that Vint had done with IP, the TCP/IP world was quick and dirty and quite slapdash. But it was non-proprietary and that is what won the race. 
What I don't understand is whether Rob's observation about networking is *fundamental* to the space or *incidental* to the implementation. I would love to be educated on that. Marc ===== nygeek.net mindthegapdialogs.com/home On Tue, Jun 28, 2022 at 8:48 AM Rich Morin wrote: > > On Jun 28, 2022, at 05:36, Rob Pike wrote: > > > > I am not a networking expert. I said that already. The issue could well > be a property more of sockets than TCP/IP itself, but having the switch do > some of the call validation and even maybe authentication (I'm not sure...) > sounds like it takes load off the host. > > Some years ago, we set up a front end email server to reject incoming > message attempts that didn't match our list of valid users. This resulted > in a better then 90% reduction. > > -r > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Tue Jun 28 23:33:08 2022 From: crossd at gmail.com (Dan Cross) Date: Tue, 28 Jun 2022 09:33:08 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: On Tue, Jun 28, 2022 at 8:46 AM Rob Pike wrote: > One of the reasons I'm not a networking expert may be relevant here. > With networks, I never found an abstraction to hang my hat on. Hmm, this raises some design questions. > Unlike with file systems and files, or even Unix character > devices, which provide a level of remove from the underlying > blocks and sectors and so on, the Unix networking interface > always seemed too low-level and fiddly, analogous to making > users write files by managing the blocks and sectors themselves. I can see this. Sockets in particular require filling in abstruse and mostly opaque data structures and then passing pointers to them into the kernel. 
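The contrast can be made concrete. Below is a sketch (not plan9's actual implementation) of a dial()-style wrapper: the getaddrinfo/socket/connect ritual, hidden behind one call that takes a plan9-style dial string such as "tcp!host!port". The dial-string syntax is borrowed from plan9; everything else is illustrative.

```python
# Sketch of a plan9-style dial(): one call and a dial string in place
# of the multi-step sockets ritual.
import socket

def dial(addr: str) -> socket.socket:
    """Connect using a dial string, e.g. dial("tcp!localhost!8080")."""
    proto, host, port = addr.split("!")
    if proto != "tcp":
        raise ValueError("only tcp is sketched here")
    err = OSError(f"cannot dial {addr}")
    # getaddrinfo + socket + connect is the ritual dial() is hiding.
    for family, socktype, protonum, _, sockaddr in socket.getaddrinfo(
            host, int(port), type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, protonum)
            s.connect(sockaddr)
            return s  # the caller now just reads and writes
        except OSError as e:
            err = e
    raise err
```

The point is not that the wrapper is clever — it is trivial — but that the connected result behaves like any other file descriptor, which is the more Unix-like surface.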
It's not so terribly onerous once one gets it going, particularly for a streaming protocol like TCP, but ergonomically it also doesn't sit particularly well in the larger paradigm. Something like plan9's dial() seems more in line with the Unix model than what Unix actually gives us in terms of TLI/Sockets/whatever. I'll note that 10th Edition had its IPC library that handled some of this. > It could be all sockets' fault, but when I hear networking people talk > about the protocols and stacks and routing and load shedding and > ....my ears droop. I know it's amazing engineering and all that, but > why aren't we allowed to program the I/O without all that fuss? > What makes networks so _different_? A telling detail is that the > original sockets interface had send and recv, not read and write. > From day 1 in Unix land at least, networking was special, and it > remains so, but I fail to see why it needs to be. Of course, the semantics of networking are a little different than the (mostly) stream-oriented file model of Unix, in that datagram protocols must be accommodated somehow and they have metadata in the form of sender/receiver information that accompanies each IO request. How does one model that neatly in the read/write case, except by prepending a header or having another argument? But the same is true of streaming to/from files and file-like things as well. I can't `seek` on a pipe or a serial device, for obvious reasons, but that implies that that model is not completely regular. Similarly, writes to, say, a raw disk device that are not a multiple of the sector size have weird semantics. The best we have done is document this and add it to the oral lore. One may argue that the disk device thing is a special case that is so uncommon and only relevant to extremely low-level systems programs that it doesn't count, but the semantics of seeking are universal: programs that want to work as Unix filters have to accommodate this somehow.
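The per-datagram metadata point is visible even through Python's thin wrapper over the socket calls: recvfrom() hands back a (data, sender) pair, which a bare read() has no slot for. A self-contained loopback sketch (function name invented for illustration):

```python
# One datagram sent to ourselves over loopback: the sender's address
# rides along with the data, unlike a plain byte-stream read.
import socket

def udp_roundtrip() -> tuple[bytes, tuple]:
    """Send one datagram to ourselves; return the (data, sender) pair."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # kernel picks an ephemeral port
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"ping", rx.getsockname())
    data, sender = rx.recvfrom(1024)   # metadata accompanies each datagram
    tx.close()
    rx.close()
    return data, sender
```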
In practice this doesn't matter much; most filters just don't seek on their input. Once again I am in awe of how Unix got it right for 90% of use cases, and makes the last 10% possible, even if painful. > It just seems there has to be a better way. Sockets are just so > unpleasant, and the endless nonsense around network > configuration doubly so. No argument there. > Rhetorical questions. I'm not asking or wanting an answer. > I'm happy to remain a greenhorn, oblivious to the wonder. As we continue forward, I wonder how much this matters. We talk about sockets, but how many programmers _actually_ reach for that interface when they want to talk over a network? I'd wager that _most_ of the time nowadays it's hidden behind a library interface that shields the consumer from the gritty details. Sure, that library probably uses sockets internally, but most people probably never look under that rock. > To adapt a reference some may recognize, I just want to read 5 terabytes. Believe it or not, I actually had Borgmon readability. - Dan C. > On Tue, Jun 28, 2022 at 10:36 PM Rob Pike wrote: >> >> I am not a networking expert. I said that already. The issue could well be a property more of sockets than TCP/IP itself, but having the switch do some of the call validation and even maybe authentication (I'm not sure...) sounds like it takes load off the host. >> >> -rob >> >> >> On Tue, Jun 28, 2022 at 8:39 PM Derek Fawcus wrote: >>> >>> On Sun, Jun 26, 2022 at 09:57:17AM +1000, Rob Pike wrote: >>> > One of the things we liked about Datakit was that the computer didn't have >>> > to establish the connection before it could reject the call, unlike TCP/IP >>> > where all validation happens after the connection is made. >>> >>> Nor does TCP, one can send a RST to a SYN, and reject the call before it is >>> established. That would then look to the caller just like a non listening >>> endpoint, unless one added data with the RST.
>>> >>> So this is really just a consequence of the sockets API, and the current implementations. >>> I've a vague recall of folks suggesting ways to expose that facility via the sockets >>> layer, possibly using setsockopt(), but don't know if anyone ever did it. >>> >>> As I recall that TCP capability was actually exposed via the TLI/XTI API, >>> and (for some STREAMS based TCP stacks) it did function. Although I may be >>> thinking of embedded STREAMS TCP stacks, not unix based stacks. >>> >>> Or by 'connection' are you referring to an end-to-end packet delivery, >>> and that Datakit allowed a closer switch to reject a call before the packet >>> got to the far end? >>> >>> DF From clemc at ccc.com Wed Jun 29 00:41:27 2022 From: clemc at ccc.com (Clem Cole) Date: Tue, 28 Jun 2022 10:41:27 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> Message-ID: On Tue, Jun 28, 2022 at 9:15 AM Marc Donner wrote: > My perception of the debate at the time was that it pitted proprietary > networking (SNA, DECNet, ...) against open networking (TCP/IP). The > hardware vendors wanted proprietary networking to lock customers into their > equipment, but that dog would not hunt. > Metcalfe's law: "*value of a network is proportional to the square of the number of connected users of the system*." The problem with a walled garden is that it can only grow as large as the walls allow. > > It was pretty clear that except for the clever encapsulation stuff that > Vint had done with IP, the TCP/IP world was quick and dirty and quite > slapdash. But it was non-proprietary and that is what won the race. 
> Point taken, but I actually think it is more of a Christensen-style disruption where the 'lessor technology' outstrips the more sophisticated one because it finds/creates a new market that values that new technology for what it is and cares less about the ways it may be 'lessor.' I described this in a talk I did at Asilomar a few years back. This is the most important slide: [image: ColesLaw20190222.png] -------------- next part -------------- A non-text attachment was scrubbed... Name: ColesLaw20190222.png Type: image/png Size: 252558 bytes Desc: not available URL: From jnc at mercury.lcs.mit.edu Wed Jun 29 01:50:50 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 28 Jun 2022 11:50:50 -0400 (EDT) Subject: [TUHS] Research Datakit notes Message-ID: <20220628155050.2D27618C096@mercury.lcs.mit.edu> > From: Rob Pike > having the switch do some of the call validation and even maybe > authentication (I'm not sure...) sounds like it takes load off the host. I don't have enough information to express a judgement in this particular case, but I can say a few things about how one would go about analyzing questions of 'where should I put function [X]; in the host, or in the 'network' (which almost inevitably means 'in the switches')'. It seems to me that one has to examine three points: - What is the 'cost' to actually _do_ the thing (which might be in transmission usage, or computing power, or memory, or delay), in each alternative; these costs obviously generally cannot be amortized across multiple similar transactions. - What is the 'cost' of providing the _mechanism_ to do the thing, in each alternative. This comes in three parts. The first is the engineering cost of _designing_ the thing, in detail; this obviously is amortized across multiple instances.
The second is _producing_ the mechanism, in the places where it is needed (for mechanisms in software, this cost is essentially zero, unless it needs a lot of memory/computes/etc); this is not amortized across many. The third is harder to measure: it's complexity. This is probably a book by itself, but it has costs that are hard to quantify, and are also very disparate: e.g. more complex designs are more likely to have unforeseen bugs, which is very different from the 'cost' that more complex designs are probably harder to evolve for new uses. So far I haven't said anything that isn't applicable across a broad range of information systems. The last influence on where one puts functions is much more common in communication systems: the Saltzer/Clark/Reed 'End-to-end Arguments in System Design' questions. If one _has_ to put a function in the host to get 'acceptable' performance of that function, the operation/implementation/design cost implications are irrelevant: one has to grit one's teeth and bear them. This may then feed back to design questions in the other areas. E.g. the Version 2 ring at MIT deliberately left out hardware packet checksums - because it was mostly intended for use with TCP/IP traffic, which provided a pseudo-End-to-End checksum, so the per-unit hardware costs didn't buy enough to be worth the costs of a hardware CRC. (Which was the right call; I don't recall the lack of a hardware checksum ever causing a problem.) And then there's the 'technology is a moving target' point: something that might be unacceptably expensive (in computing cost) in year X might be fine in year X+10, when we're lighting our cigars with unneeded computing power. So when one is designing a communication system with a likely lifetime in many decades, one tends to bias one's judgement toward things like End-to-End analysis - because those factors will be forever. Sorry if I haven't offered any answer to your initial query: "having the switch do some of the call validation ...
sounds like it takes load off the host", but as I have tried to explain, these 'where should one do [X]' questions are very complicated, and one would need a lot more detail before one could give a good answer. But, in general, "tak[ing] load off the host" doesn't seem to rate highly as a goal these days... :-) :-( Noel From tjteixeira at earthlink.net Wed Jun 29 01:54:41 2022 From: tjteixeira at earthlink.net (Tom Teixeira) Date: Tue, 28 Jun 2022 11:54:41 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> Message-ID: <46b1281e-2c9d-3798-3d52-991070faeb64@earthlink.net> Christensen-style disruption sounds rather like Gresham's Law ("bad money drives out good"), but I don't think the mechanism is the same: one can hoard old silver coins and sell those at a profit for the silver content, but there's no premium I know of for better technology -- probably because "better technology" seems to imply aesthetics, but newer, "lessor technology" is likely to be much faster while using less energy. On 6/28/22 10:41 AM, Clem Cole wrote: > > > On Tue, Jun 28, 2022 at 9:15 AM Marc Donner wrote: > > My perception of the debate at the time was that it pitted > proprietary networking (SNA, DECNet, ...) against open networking > (TCP/IP).  The hardware vendors wanted proprietary networking to > lock customers into their equipment, but that dog would not hunt. > > Metcalfe's law: "/value of a network is proportional to the square of > the number of connected users of the system/."The problem with a > walled garden is that it can only grow as large as the walls allow. > > > It was pretty clear that except for the clever encapsulation stuff > that Vint had done with IP, the TCP/IP world was quick and dirty > and quite slapdash.  But it was non-proprietary and that is what > won the race. 
> > Point taken, but I actually think it is more of a > Christensen-style disruption where the 'lessor technology' outstrips > the more sophisticated one because it finds/creates a new market that > values that new technology for what it is and cares less about the > ways it may be 'lessor.' > > I described this in a talk I did at Asilomar a few years back.  This > is the most important slide: > ColesLaw20190222.png > From tjteixeira at earthlink.net Wed Jun 29 02:11:10 2022 From: tjteixeira at earthlink.net (Tom Teixeira) Date: Tue, 28 Jun 2022 12:11:10 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: <4d26b9fe-a087-7e9f-9bb1-aa1a85bf39d5@earthlink.net> On 6/28/22 8:45 AM, Rob Pike wrote: > One of the reasons I'm not a networking expert may be relevant here. > With networks, I never found an abstraction to hang my hat on. Unlike > with file systems and files, or even Unix character devices, which > provide a level of remove from the underlying blocks and sectors and > so on, the Unix networking interface always seemed too low-level and > fiddly, analogous to making users write files by managing the blocks > and sectors themselves. It could be all sockets' fault, but when I > hear networking people talk about the protocols and stacks and routing > and load shedding and ....my ears droop. I know it's amazing > engineering and all that, but why aren't we allowed to program the I/O > without all that fuss? What makes networks so _different_? A telling > detail is that the original sockets interface had send and recv, not > read and write.
From day 1 in Unix land at least, networking was > special, and it remains so, but I fail to see why it needs to be. Two observations: At the time, I think everyone was searching for the right abstraction. I don't remember the whole talk, but just an anecdote by, I think, David Clark. I don't remember if this was just a seminar at MIT LCS or perhaps at SIGOPS. In any case, David talked about trying to get some database people to use the virtual memory and file systems abstractions that had been built by Multics. They agreed that these were nice abstractions, but in the mean time, "get out of our way and let us at the disk." Since networking developers and users were all searching for the right abstraction, and new hardware, protocols, and software interfaces were being proposed on what seemed like a weekly basis, many of the proposed interfaces tried to expose low level mechanisms as well as high level stream abstractions, preserving the hope that something like TCP could be implemented at user code level rather than the kernel. Secondly, I had to dig up a reference for the Chaosnet software (MIT AI memo 628 available at http://bitsavers.trailing-edge.com/pdf/mit/ai/AIM-628_chaosnet.pdf and probably other places). The Unix implementation used the rest of the path name to specify connection setup parameters in the typical case which seemed more unix-like than sockets. But the Chaosnet software was definitely swept away in the Ethernet/sockets storm surge. 
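The path-name trick Tom mentions — carrying connection-setup parameters in the rest of the name, so that open()-style naming stands in for a separate setup API — can be sketched as follows. The path layout here is invented for illustration; it is not the actual encoding documented in AIM-628.

```python
# Hypothetical sketch: connection-setup parameters encoded in a path
# name, so "opening" a network connection looks like opening a file.
def parse_net_path(path: str) -> dict:
    """Parse e.g. "/net/chaos/mit-ai/status" into setup parameters."""
    parts = path.strip("/").split("/")
    if len(parts) < 4 or parts[0] != "net":
        raise ValueError(f"not a network path: {path}")
    return {"protocol": parts[1], "host": parts[2],
            "contact": "/".join(parts[3:])}
```

A real driver would hand these parameters to the protocol stack and return a file descriptor; the appeal is that the namespace, not a new system-call family, carries the addressing.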
From athornton at gmail.com Wed Jun 29 03:05:26 2022 From: athornton at gmail.com (Adam Thornton) Date: Tue, 28 Jun 2022 10:05:26 -0700 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> Message-ID: <76DD5063-3817-4550-980A-66E728DFD634@gmail.com> > On Jun 28, 2022, at 6:13 AM, Marc Donner wrote: > > What I don't understand is whether Rob's observation about networking is *fundamental* to the space or *incidental* to the implementation. I would love to be educated on that. And there it is! THAT was the sentence--well, short paragraph--that jogged my memory as to why this seemed familiar. If you go back to _The Unix-Hater's Handbook_ (I know, I know, bear with me), one of the things I noticed and pointed out in my review (https://athornton.dreamwidth.org/14272.html) is how many of the targets of hatred, twenty years down the line, turned out to be unix-adjacent, and not fundamental. In the book, these were things like Usenet and sendmail.cf (indeed, those were the two big ones). But the current discussion: is the thing we don't like Berkeley Sockets? Is it TCP/IP itself? Is it the lack of a Unixy abstraction layer over some lower-level technology? To what degree is it inherent? I mean, obviously, to some degree it's all three, and I think a large but fairly unexamined part of it is that TCP/IP these days almost always at least pretends to be sitting on top of Ethernet at the bottom...but of course Classic Ethernet largely died in the...early 2000s, I guess?...when even extremely cheap home multiple-access-devices became switches rather than hubs. Some sort of inter-machine networking is clearly inherent in a modern concept of Unix. I think we're stuck with the sockets interface and IP, whether we like them or not. They don't bother me a great deal, but, yes, they do not feel as unixy as, say, /dev/tcp does.
But the interesting thing is that I think that is Unix-adjacent or, like the UHH distaste for Unix filesystems, it's at least incidental and could be replaced if the desire arose. And I think we already have the answer about what the abstraction is, albeit at an application rather than the kernel level. To answer Rob's question: I think the abstraction is now much farther up the stack. To a pretty good first approximation, almost all applications simply define their own semantics on top of HTTP(S) (OK, OK, Websockets muddy the waters again) and three-to-five verbs. There's an incantation to establish a circuit (or a "session" if you're under the age of 50, I guess), and then you GET, DELETE, and at least one of PUT/POST/PATCH, for "read", "unlink", and "write". This does seem to be a more record-oriented (kids these days get snippy if you call them "records" rather than "objects" but w/e) format than a stream of bytes (or at least you put an abstraction layer in between your records and the stream-of-octets that's happening). This is certainly not efficient at a wire protocol level, but it's a fairly small cognitive burden for people who just want to write applications that communicate with each other. Adam From johnl at johnlabovitz.com Wed Jun 29 03:43:44 2022 From: johnl at johnlabovitz.com (John Labovitz) Date: Tue, 28 Jun 2022 19:43:44 +0200 Subject: [TUHS] Research Datakit notes In-Reply-To: <76DD5063-3817-4550-980A-66E728DFD634@gmail.com> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> <76DD5063-3817-4550-980A-66E728DFD634@gmail.com> Message-ID: I’m generally a lurker here, but this has been an interesting conversation to observe.
I often hesitate to post, so as not to offend you folks who literally invented this stuff, but I thought it might be helpful to share some experiences with socket-based I/O. For the first time in ~20 years, I’ve recently been writing low-level (e.g., not through a library layer/framework) socket code in straight C, for an art project based on an ESP32 embedded/SoC. I first played around with the socket API in the mid-80s, and then wrote a lot of socket-using code in the 1990s for the Watchguard Firebox, the first Linux-based appliance firewall. I have to say that I really enjoy programming with sockets. I feel that it *does* make a lot of sense if I'm thinking directly about the TCP/IP stack, *and* if my code has a good 'impedance match' to the protocols. If I’m writing a server, I’m dealing with connections and queues and various-sized packets/messages/blocks, which have to fit into some decision of memory usage (often true in embedded systems). Usually I’m not simply writing, say, a file server that simply reads from disc and sends bytes out through a stream and then calls close(). I also believe that the sockets API really comes into its own with high-capacity, non-threaded, non-blocking servers or clients — that is, ones that use select() or poll() and then recv() and send() or their variants. I’m sure that if I didn’t have a sockets API and only had open(), read(), write(), etc., I could make it work, but there’s something almost beautiful that happens at a large scale with many non-blocking sockets (see: the reactor pattern) that I don’t think would translate as well with a typical everything-is-a-file model. My opinion solely, of course. But I’m simply happy that both socket- and file-based APIs exist. Each has their purpose.
—John > On Jun 28, 2022, at 19:05, Adam Thornton wrote: > > > >> On Jun 28, 2022, at 6:13 AM, Marc Donner wrote: >> >> What I don't understand is whether Rob's observation about networking is *fundamental* to the space or *incidental* to the implementation. I would love to be educated on that. > > And there it is! THAT was the sentence--well, short paragraph--that jogged my memory as to why this seemed familiar. > > If you go back to _The Unix-Hater's Handbook_ (I know, I know, bear with me), one of the things I noticed and pointed out in my review (https://athornton.dreamwidth.org/14272.html) is how many of the targets of hatred, twenty years down the line, turned out to be unix-adjacent, and not fundamental. > > In the book, these were things like Usenet and sendmail.cf (indeed, those were the two big ones). > > But the current discussion: is the thing we don't like Berkeley Sockets? Is it TCP/IP itself? Is it the lack of a Unixy abstraction layer over some lower-level technology? To what degree is it inherent? > > I mean, obviously, to some degree it's all three, and I think a large but fairly unexamined part of it is that TCP/IP these days almost always at least pretends to be sitting on top of Ethernet at the bottom...but of course Classic Ethernet largely died in the...early 2000s, I guess?...when even extremely cheap home multiple-access-devices became switches rather than hubs. > > Some sort of inter-machine networking is clearly inherent in a modern concept of Unix. I think we're stuck with the sockets interface and IP, whether we like them or not. They don't bother me a great deal, but, yes, they do not feel as unixy as, say, /dev/tcp does. But the interesting thing is that I think that is Unix-adjacent or, like the UHH distaste for Unix filesystems, it's at least incidental and could be replaced if the desire arose. And I think we already have the answer about what the abstraction is, albeit at an application rather than the kernel level.
> > To answer Rob's question: I think the abstraction is now much farther up the stack. To a pretty good first approximation, almost all applications simply define their own semantics on top of HTTP(S) (OK, OK, Websockets muddy the waters again) and three-to-five verbs. There's an incantation to establish a circuit (or a "session" if you're under the age of 50, I guess), and then you GET, DELETE, and at least one of PUT/POST/PATCH, for "read", "unlink", and "write". This does seem to be a more record-oriented (kids these days get snippy if you call them "records" rather than "objects" but w/e) format than a stream of bytes (or at least you put an abstraction layer in between your records and the stream-of-octets that's happening). > > This is certainly not efficient at a wire protocol level, but it's a fairly small cognitive burden for people who just want to write applications that communicate with each other. > > Adam From john at jfloren.net Wed Jun 29 04:28:54 2022 From: john at jfloren.net (John Floren) Date: Tue, 28 Jun 2022 18:28:54 +0000 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: On 6/28/22 05:45, Rob Pike wrote: > [...] I know it's amazing engineering and > all that, but why aren't we allowed to program the I/O without all that > fuss? What makes networks so _different_? A telling detail is that the > original sockets interface had send and recv, not read and write. From > day 1 in Unix land at least, networking was special, and it remains so, > but I fail to see why it needs to be. > > It just seems there has to be a better way. Sockets are just so > unpleasant, and the endless nonsense around network configuration doubly so. > I was pretty sad when netchans were discontinued. A colleague was sufficiently attached to them that he kept his own branch of the library going for a while.
john From stewart at serissa.com Wed Jun 29 07:19:27 2022 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 28 Jun 2022 17:19:27 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> Message-ID: <07115053-45A0-4758-9B96-0B631D8A5B07@serissa.com> On 2022, Jun 28, at 9:33 AM, Dan Cross wrote: > > On Tue, Jun 28, 2022 at 8:46 AM Rob Pike wrote: >> One of the reasons I'm not a networking expert may be relevant here. >> With networks, I never found an abstraction to hang my hat on. > My theory is that networking is different because it breaks all the time. One can get away with writing local file system programs without checking the return codes on every single call and figuring out what you should do in the case of every possible error. It is much harder to get away with that sort of thing when writing network applications. And of course there is a long history of things that look like file systems but which have network failure modes, and treating them like they are reliable often does not end well. Full up reliable applications with replicated storage and multiple network availability zones and raft/paxos/byzantine generals level coding are pretty arcane. I know I am not qualified to write one. File systems are “good enough” that you can depend on them, mostly. Networks are not. -L From stewart at serissa.com Wed Jun 29 07:32:03 2022 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 28 Jun 2022 17:32:03 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: <20220628155050.2D27618C096@mercury.lcs.mit.edu> References: <20220628155050.2D27618C096@mercury.lcs.mit.edu> Message-ID: I’ll argue there is quite a lot known about where to put network functionality, much of it from HPC. 
If you want minimum latency and minimum variance of latency, both of which are important to big applications, you make the network reliable and move functionality into the host adapters and the switches. The code path at each end of a very good MPI implementation will be under 200 machine instructions, all in user mode. There is no time to do retry or variable code paths. Doesn’t work on WANs of course, or at consumer price points. (I think there is still a lot to do, because the best networks still hover around 800 nanoseconds calling SEND to returning from RECV, and I think it could be 100). -L > On 2022, Jun 28, at 11:50 AM, Noel Chiappa wrote: > >> From: Rob Pike > >> having the switch do some of the call validation and even maybe >> authentication (I'm not sure...) sounds like it takes load off the host. > > I don't have enough information to express a judgement in this particular > case, but I can say a few things about how one would go about analyzing > questions of 'where should I put function [X]; in the host, or in the > 'network' (which almost inevitably means 'in the switches')'. > From rich.salz at gmail.com Wed Jun 29 07:34:53 2022 From: rich.salz at gmail.com (Richard Salz) Date: Tue, 28 Jun 2022 17:34:53 -0400 Subject: [TUHS] Research Datakit notes In-Reply-To: <07115053-45A0-4758-9B96-0B631D8A5B07@serissa.com> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <07115053-45A0-4758-9B96-0B631D8A5B07@serissa.com> Message-ID: > My theory is that networking is different because it breaks all the time. This Sun research paper from 1994 https://scholar.harvard.edu/waldo/publications/note-distributed-computing is a classic.
From dfawcus+lists-tuhs at employees.org Wed Jun 29 08:45:18 2022 From: dfawcus+lists-tuhs at employees.org (Derek Fawcus) Date: Tue, 28 Jun 2022 23:45:18 +0100 Subject: [TUHS] HTTP (was Re: Re: Research Datakit notes) In-Reply-To: <76DD5063-3817-4550-980A-66E728DFD634@gmail.com> References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <0CA3B3AA-6491-47A5-843D-CDF2F3A74659@cfcl.com> <76DD5063-3817-4550-980A-66E728DFD634@gmail.com> Message-ID: On Tue, Jun 28, 2022 at 10:05:26AM -0700, Adam Thornton wrote: > > And I think we already have the answer about what the abstraction is, albeit at an application rather than the kernel level. > > To answer Rob's question: I think the abstraction is now much farther up the stack. To a pretty good first approximation, almost all applications simply define their own semantics on top of HTTP(S) (OK, OK, Websockets muddy the waters again) and three-to-five verbs. There's an incantation to establish a circuit (or a "session" if you're under the age of 50, I guess), and then you GET, DELETE, and at least one of PUT/POST/PATCH, for "read", "unlink", and "write". This does seem to be a more record-oriented (kids these days get snippy if you call them "records" rather than "objects" but w/e) format than a stream of bytes (or at least you put an abstraction layer in between your records and the stream-of-octets that's happening). Yes it is effectively records, or RPCs w/o state between requests. So akin to DNS queries over UDP, rather than DNS queries over TCP. As the whole HTTP model allows for proxies, at which point the request (GET, HEAD, POST, etc.) includes an endpoint address (DNS name and port) and path at that endpoint. Having 'session' state is built at a level above HTTP, by cookies or redirecting to magic URIs encoding the session ID after the query (the '?'). The fact that this all happens to run over a TCP connection simply means that we have stacked sets of VCs.
An application VC on top of a connectionless HTTP on top of a TCP 'VC'. One could argue that Websockets simplifies this by ripping out the HTTP layer, and having the top VC then just being message framing within the TCP session. (Ignoring for the moment TLS on top of TCP, or HTTP/3 being on top of QUIC, hence UDP) > This is certainly not efficient at a wire protocol level, but it's a fairly small cognitive burden for people who just want to write applications that communicate with each other. Sort of, in that I recently investigated the Go HTTP client/server APIs as someone was asking us about implementing a MiTM HTTP(S) "firewall". Depending upon how one deals with it, it hides the VCs and exposes the connectionless RPCs. With such a MiTM in place, the 'nice' RPC in a session is effectively forced back to the 'nasty' RPC in datagrams, even though the endpoints may be unaware, but they already include the whole datagram remote address in the request. Responses come back by 'magic' without explicit address. So maybe from that perspective one could model HTTP request/response as RPCs over SVCs, those being raised and torn down for each exchange. DF From stu at remphrey.net Wed Jun 29 16:07:11 2022 From: stu at remphrey.net (Stuart Remphrey) Date: Wed, 29 Jun 2022 14:07:11 +0800 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> <20220625230939.GG19404@mcvoy.com> <07115053-45A0-4758-9B96-0B631D8A5B07@serissa.com> Message-ID: Yes, I'd thought of Sun's Jini here, part of their Java push -- which IIRC exposed network (un)reliability and tried to address it through service lookup and limited-lifetime lease/renew for remote resource access. Though I'm not sure it went anywhere? (or how TUHS-relevant this is/was...
apologies if not) ----- Hmm, apparently Jini became Apache River; last release in 2016, then retired Feb 2022: https://en.wikipedia.org/wiki/Jini On Wed, 29 Jun 2022, 05:36 Richard Salz, wrote: > > My theory is that networking is different because it breaks all the time. > > This Sun research paper from 1994 > https://scholar.harvard.edu/waldo/publications/note-distributed-computing > is a classic. > > > From pnr at planet.nl Thu Jun 30 06:21:13 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Wed, 29 Jun 2022 22:21:13 +0200 Subject: [TUHS] Research Datakit notes In-Reply-To: References: <2803DC51-6CBC-4257-B40C-8A559C27CAE3@planet.nl> Message-ID: > Would you happen to know where I can find copies of these three > papers? > > A. G. Fraser, "Datakit - A Modular Network for Synchronous and > Asynchronous Traffic", Proc. ICC 79, June 1979, Boston, Ma., > pp.20.1.1-20.1.3 > > G. L. Chesson, "Datakit Software Architecture", Proc. ICC 79, June > 1979, Boston Ma., pp.20.2.1-20.2.5 > > G. L. Chesson and A. G. Fraser, "Datakit Network Architecture," Proc. > Compcon 80, February 1980, San Francisco CA., pp.59-61 I just remembered that I had received a copy of a file note (50+ pages) that Greg Chesson wrote in 1982 about the "CMC” control software for Datakit. I think it covers the same ground as the 1979 paper, but in far greater detail and with two more years of development. In short, the connection protocol in CMC is based on the exchange of binary messages. That was replaced (for the most part) by text-based messages in the later TDK control software. It is here (it is a 16MB pdf): https://www.jslite.net/notes/dk3.pdf To compare, here are the first two design documents on sockets.
I looked for these for many years (even had the Berkeley library manually search the boxes with CSRG documents that Kirk McKusick had sent there - to no avail), and then in 2021 Rich Morin found them in the papers of Jim Joyce. I’m still very thankful for this. These two papers were written in the summer of 1981 and circulated to the newly formed steering committee for what was to become 4.2BSD (note: ~5MB pdf each). The first is specifically on networking: https://www.jslite.net/notes/joy1.pdf The second outlines the overall ambitions for the new version (including a summary of the above document). It has an interesting view of John Reiser’s VM code in its section 3.17 as well: https://www.jslite.net/notes/joy2.pdf What was proposed is not quite the sockets we know, but the general direction is set and the reasoning is explained. Reading the Chesson and Joy papers side by side makes for an interesting comparison of thinking on these topics in the early 80’s. Maybe they are worth storing in the TUHS archive. Wbr, Paul From sjenkin at canb.auug.org.au Thu Jun 30 23:14:20 2022 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Thu, 30 Jun 2022 23:14:20 +1000 Subject: [TUHS] "9 skills our grandkids won't have" - Is this a TUHS topic? Message-ID: <180245D1-0DCD-4C2C-A26A-EF68578FD548@canb.auug.org.au> What are the 1970’s & 1980’s Computing / IT skills “our grandkids won’t have”? Whistling into a telephone while the modem is attached, because your keyboard has a stuck key - something I absolutely don’t miss. Having a computer in a grimy warehouse with 400 days of uptime & wondering how a reboot might go? steve j ========= 9 Skills Our Grandkids Will Never Have 1: Using record players, audio cassettes, and VCRs 2: Using analog phones [ or an Analog Clock ] 3. Writing letters by hand and mailing them 4. Reading and writing in cursive 5. Using manual research methods [ this is a Genealogy site ] 6. Preparing food the old-fashioned way 7.
Creating and mending clothing 8. Building furniture from scratch 9. Speaking the languages of their ancestors -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From marc.donner at gmail.com Thu Jun 30 23:39:15 2022 From: marc.donner at gmail.com (Marc Donner) Date: Thu, 30 Jun 2022 09:39:15 -0400 Subject: [TUHS] "9 skills our grandkids won't have" - Is this a TUHS topic? In-Reply-To: <180245D1-0DCD-4C2C-A26A-EF68578FD548@canb.auug.org.au> References: <180245D1-0DCD-4C2C-A26A-EF68578FD548@canb.auug.org.au> Message-ID: Programming an 026 skip card. Inserting the skip card. Using ed in kernel safe mode to fix a broken config file. Threading a half-inch tape in a tape drive. Remembering to insert or remove the write ring. Cleaning floppy disk heads. Manually keying a boot program into an SDS-930. ===== nygeek.net mindthegapdialogs.com/home On Thu, Jun 30, 2022 at 9:14 AM steve jenkin wrote: > What are the 1970’s & 1980’s Computing / IT skills “our grandkids won’t > have”? > > Whistling into a telephone while the modem is attached, because your > keyboard has a stuck key > - something I absolutely don’t miss. > > Having a computer in a grimy warehouse with 400 days of uptime & > wondering how a reboot might go? > > steve j > > ========= > > 9 Skills Our Grandkids Will Never Have > < > https://blog.myheritage.com/2022/06/9-skills-our-grandkids-will-never-have/ > > > > 1: Using record players, audio cassettes, and VCRs > 2: Using analog phones > [ or an Analog Clock ] > 3. Writing letters by hand and mailing them > 4. Reading and writing in cursive > 5. Using manual research methods [ > this is a Genealogy site ] > 6. Preparing food the old-fashioned way > 7. Creating and mending clothing > 8. Building furniture from scratch > 9.
Speaking the languages of their ancestors > > -- > Steve Jenkin, IT Systems and Design > 0412 786 915 (+61 412 786 915) > PO Box 38, Kippax ACT 2615, AUSTRALIA > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin > > From tjteixeira at earthlink.net Thu Jun 30 23:54:00 2022 From: tjteixeira at earthlink.net (Tom Teixeira) Date: Thu, 30 Jun 2022 09:54:00 -0400 Subject: [TUHS] "9 skills our grandkids won't have" - Is this a TUHS topic? In-Reply-To: References: <180245D1-0DCD-4C2C-A26A-EF68578FD548@canb.auug.org.au> Message-ID: <3c213543-5c4c-1f09-6454-a3c6243d731b@earthlink.net> Taking floppy disks out of the holders and floating them on the cold air updrafts in computer room false floor. On 6/30/22 9:39 AM, Marc Donner wrote: > Programming an 026 skip card.  Inserting the skip card. > Using ed in kernel safe mode to fix a broken config file. > Threading a half-inch tape in a tape drive.  Remembering to insert or > remove the write ring. > Cleaning floppy disk heads. > Manually keying a boot program into an SDS-930. > ===== > nygeek.net > mindthegapdialogs.com/home > > > On Thu, Jun 30, 2022 at 9:14 AM steve jenkin > wrote: > > What are the 1970’s & 1980’s Computing / IT skills “our grandkids > won’t have”? > > Whistling into a telephone while the modem is attached, because > your keyboard has a stuck key >          - something I absolutely don’t miss. > > Having a computer in a grimy warehouse with 400 days of uptime & > wondering how a reboot might go? > > steve j > > ========= > > 9 Skills Our Grandkids Will Never Have >         > > >         1: Using record players, audio cassettes, and VCRs >         2: Using analog phones                   [ or an Analog > Clock ] >         3. Writing letters by hand and mailing them >         4. Reading and writing in cursive >         5. Using manual research methods           [ this is a > Genealogy site ] >         6.
Preparing food the old-fashioned way >         7. Creating and mending clothing >         8. Building furniture from scratch >         9. Speaking the languages of their ancestors > > -- > Steve Jenkin, IT Systems and Design > 0412 786 915 (+61 412 786 915) > PO Box 38, Kippax ACT 2615, AUSTRALIA > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin >