From tuhs at tuhs.org Sat Nov 1 02:04:27 2025
From: tuhs at tuhs.org (segaloco via TUHS)
Date: Fri, 31 Oct 2025 16:04:27 +0000
Subject: [TUHS] On Graduation from (VI) to (I)
In-Reply-To: <202510310833.59V8XCal099338@freefriends.org>
References: <202510310833.59V8XCal099338@freefriends.org>
Message-ID: 

On Friday, October 31st, 2025 at 01:33, Arnold Robbins via TUHS wrote:

> segaloco via TUHS tuhs at tuhs.org wrote:
>
> > Was it less of a big deal than my dramatic delivery would suggest?
>
> Probably. But I wasn't there. Doug is probably the best person to
> answer.
>
> Arnold

Well, and if it helps, I'd be equally interested in tales from other shops with their own onus to fill their respective UNIX manual sections. Just seemed like a fun topic that could inspire some interesting little stories.

- Matt G.

From tuhs at tuhs.org Sun Nov 2 00:42:14 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 10:42:14 -0400
Subject: [TUHS] 3 essays on the unix legacy
Message-ID: 

I wanted to share three brief essays I wrote on LinkedIn. I hope it's appropriate to share them here.

The Cathedral, the Bazaar, and the Marketplace

People sometimes say Linux suffers from a “Not Invented Here” problem. They point to technologies like DTrace and ZFS — born in Solaris, admired by Linux, but never fully adopted. FreeBSD, macOS, even Windows embraced DTrace. Linux went its own way, creating eBPF and Btrfs instead.

At first glance, it looks like stubbornness. But look closer, and you see a deeper truth about how different systems and their communities evolve.

The Cathedral: Solaris and the Dream of Perfection

Solaris and the BSDs were cathedral projects: elegant, coherent, and built to last. Their philosophy was architectural: design it perfectly once, and maintain it forever. That stability made masterpieces like ZFS and DTrace possible.

Cathedrals are slow to change. They preserve beauty, not momentum.

The Bazaar: Linux and the Art of Reinvention

Linux took the opposite path. Its ecosystem is messy, distributed, and loud, a bazaar where competing ideas coexist until one wins by survival, not decree. It doesn’t import technologies wholesale. It reinvents them from first principles.

That’s why instead of adopting DTrace, Linux built eBPF, a programmable virtual machine for tracing, networking, and observability. It’s more complex, less elegant, but more adaptable.

This isn’t “Not Invented Here.” It’s “Invented Anew, Every Time.” It’s a culture that prizes autonomy over elegance, vitality over symmetry.

The Marketplace: Microsoft and the Pragmatism of Scale

Then there’s Microsoft, once the cathedral’s rival, now a marketplace of its own. Its genius has never been invention but absorption: taking good ideas from elsewhere and integrating them into something cohesive.

PowerShell drew from Unix shells and .NET reflection. NTFS inherited DNA from VMS. Today, Microsoft ships Linux inside Windows, Edge on Chromium, and hosts GitHub — the beating heart of open source.

Yet the uptake of Microsoft’s own open-source tools among Linux users has been modest. You can install PowerShell on Ubuntu or .NET on Debian, but few do. Not because the tools are bad, but because open source isn’t just a license, it’s a language.

Microsoft’s tools still speak in the idioms of Windows. They solve problems that feel foreign in Unix hands. You can open-source the code, but you can’t open-source the culture overnight.

Three Philosophies, One Ecosystem

- Solaris / BSD: Design it perfectly, then preserve it.
- Linux: Rebuild it constantly, and let it evolve.
- Microsoft: Adopt it broadly, and make it familiar.

Each model has its genius, and its limits. Solaris gave us clarity but stagnated. Linux gave us chaos but endurance. Microsoft gave us cohesion, and at times, a touch of the blasé.

But in 2025, the walls between them have thinned. Linux runs in Azure. eBPF runs on Windows. Solaris’s spirit lives on in every file system that promises self-healing storage.

The world has evolved since ESR first told us about the Cathedral and the Bazaar.

Before the Bazaar

The lineage of open collaboration from Bell Labs to the AI Lab to Linux.

In the early days, Unix was a community. Not a corporate product or a stealth research project, but a loose fellowship of programmers trading code on tape reels. Thompson and Ritchie gave it away for a nominal fee, and universities adopted it because it was small, elegant, and instructional. The code moved by post and by modem; ideas moved even faster. Every new utility carried someone’s initials in the source comments, a quiet signature of a gift freely given.

Meanwhile, at MIT, another tribe of hackers lived by a similar rhythm, though they worked on different machines. Their home was the AI Lab, and their world ran on PDP-10s under ITS, a timesharing system they had shaped in their own image. It was a place where curiosity outranked hierarchy, where anyone could read or patch any program, and where a clever hack was its own kind of currency.

For a while, both cultures thrived on openness. But while the Unix world diffused into hundreds of campuses and companies, the AI Lab was a more fragile ecosystem. When the hardware aged and the market closed in, that ecosystem broke. What had been an everyday freedom, editing each other’s code, suddenly became trespass.

The lab was dying, and everyone could feel it. The old PDP-10s hummed like relics, and the laughter that used to spill from the terminals had thinned to the occasional keystroke. A printer driver had been locked away behind a corporate contract; a colleague left for a company that paid him to keep quiet. The code that once bound the room together was vanishing into sealed disks and nondisclosure.

Richard Matthew Stallman stood in the middle of that silence and made a decision that would outlast the machines. If the lab was gone, he would build another. One not bound by walls or employers. One where the source itself would be the meeting place.

For all the talk of freedom you may have heard from rms, GNU wasn’t born from utopia; it was born from grief. It was his hope that a community could be written back into existence, line by line.

Before him, long before the Internet turned collaboration into a torrent, there was a room at Bell Labs Center 1127. That was the first bazaar: quiet, fluorescent, lined with a PDP-11 and teletype terminals. The people who used Unix were the same ones who built it.

When Ken Thompson or Dennis Ritchie wrote a tool, it wasn’t for customers; it was for the colleague one door down. Brian Kernighan would stop by with an idea for a text filter. Joe Ossanna needed better document formatting. Doug McIlroy wanted to teach the machines how to speak in little, composable verbs. By lunchtime, half the lab was using the new tool, and by evening, someone else had improved it.

The same impulse stretches now across continents instead of offices. The bazaar simply scaled up the Unix room: at Bell Labs, at the AI Lab at MIT, and now in your every git pull.
The Rhetoric of the Bazaar

How Eric S. Raymond sold open source as a process improvement.

When The Cathedral and the Bazaar appeared in the late 1990s, it read like field notes from a new frontier. Raymond seemed to be explaining why Linux’s sprawling, volunteer army had outpaced corporate software. But the essay was more than observation. It was persuasion dressed as ethnography, a cultural revolution disguised as engineering advice.

The narrow slogan

“Given enough eyeballs, all bugs are shallow.”

The line became gospel. Short, clever, and apparently scientific, it reduced open collaboration to a form of distributed debugging. Many eyes, fewer bugs. Collaboration, in this light, was not a creative act but a safety net.

It was a perfect slogan for the audience he needed. Managers could measure defects. Executives could chart release velocity. You could sell that to a boardroom. “Open source fixes bugs faster” sounds like efficiency; “open source changes how humans organize” sounds like insurrection.

The trade

So Raymond made a trade. He gave up the movement’s breadth for credibility. The grand claim, that transparency breeds better design and deeper understanding, became a smaller, safer one about quality assurance. And in doing so, he made the revolution sound replicable.

It worked. Netscape opened its code. “Open source” replaced “free software.” Corporations joined the bazaar without ever entering the community. They adopted the method, not the meaning.

The diary as proof

Even the long detour through fetchmail fits the pattern. It reads like autobiography, but it’s really evidence: if this model works for me, it can work for you. The diary is a case study, not merely an exposition. Raymond wasn’t just documenting open development. He was demonstrating it.

The legacy

The quiet compromise of the essay is that by focusing on bugs instead of ideology, Raymond made the unfamiliar familiar. He turned rebellion into best practice. And in doing so, he helped open source escape the lab and enter the market—but also stripped it of its soul.

It split the movement. The free software camp clung to ethics; the open source camp to efficiency. Each accused the other of missing the point.

Perhaps both did.

The real power of the bazaar wasn’t in its license or its process. It was in the way it made people feel seen, the way a thousand strangers could build something together and call it theirs. That’s what made the terminals hum and the mailing lists sing.

The real birth wasn’t a method or a manifesto. It was a new community, just another example of what happens when people share a common need and work together to make it happen. And it didn’t belong to either banner. It belonged to everyone who showed up.

From tuhs at tuhs.org Sun Nov 2 00:59:21 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 10:59:21 -0400
Subject: [TUHS] evolution of the cli
Message-ID: 

i'm a bit reluctant to post this here lest you rip it apart, but i guess i'm ok with that if it happens. i'm more interested in learning the truth than i am in being right.

The Evolution of the Command Line: From Terseness to Expression

1. The Classical Unix Model (1970s–80s)

cmd -flags arguments

The early Unix commands embodied the ideal of “do one thing well.” Each flag was terse and mnemonic (-l, -r), and each utility was atomic. The shell provided composition through pipes. Commands like grep, cut, and sort combined to perform a series of operations on the same data stream.
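To make the composition mechanism concrete, here is a minimal C sketch of what the shell does to wire a two-stage pipeline like grep pattern file.txt | sort (the pattern and file name are invented for the example, and most error handling is trimmed):

    /* pipeline.c: a minimal sketch of how a shell composes two commands
     * with a pipe, roughly what happens for "grep pattern file.txt | sort".
     * POSIX assumed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];

        if (pipe(fd) == -1) {
            perror("pipe");
            exit(1);
        }
        if (fork() == 0) {                  /* child 1: the producer */
            dup2(fd[1], STDOUT_FILENO);     /* stdout -> pipe write end */
            close(fd[0]);
            close(fd[1]);
            execlp("grep", "grep", "pattern", "file.txt", (char *)0);
            perror("execlp grep");
            exit(1);
        }
        if (fork() == 0) {                  /* child 2: the consumer */
            dup2(fd[0], STDIN_FILENO);      /* stdin <- pipe read end */
            close(fd[0]);
            close(fd[1]);
            execlp("sort", "sort", (char *)0);
            perror("execlp sort");
            exit(1);
        }
        close(fd[0]);                       /* parent must close both ends */
        close(fd[1]);                       /* or the consumer never sees EOF */
        while (wait(NULL) > 0)              /* reap both children */
            ;
        return 0;
    }

Every stage is just a process with its standard output spliced to the next stage's standard input; that plumbing is the whole composition model.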
2. The GNU Era (late 80s–90s)

cmd --long-simple --long-key=value [arguments]

The GNU Project introduced long options to help people remember what the terse flags meant. Common options like --help and --version made tools self-describing.

Strengths: clarity, accessibility, scriptability
Weaknesses: creeping featurism

3. The “Swiss Army Knife” Model (1990s–2000s)

Next was consolidation. Developers shipped a single binary with multiple subcommands:

command subcommand [options] [arguments]

Example: openssl x509 -in cert.pem -noout -text

Each subcommand occupied its own domain, effectively creating namespaces. This structure defined tools like git, svn, and openssl.

Strengths: unified packaging, logical grouping
Weaknesses: internal inconsistency; subcommands evolved unevenly.

4. The Verb-Oriented CLI (2000s–Present)

As CLIs matured, their design grew more linguistic. Tools like Docker, Git, and Kubernetes introduced verb-oriented hierarchies:

tool verb [object] [flags]

Example: docker run -it ubuntu bash

This mapped naturally to the mental model of performing actions: “Docker, run this container.” “Git, commit this change.” Frameworks like Go’s Cobra or Python’s Click standardized the pattern.

Strengths: extensible, discoverable, self-documenting
Weaknesses: verbosity and conceptual overhead. A CLI became an ecosystem.

5. The Sententious Model

When a domain grows too complex for neat hierarchies, a single command becomes a compact expression of a workflow. Consider zfs, an elegant example of declarative-imperative blending:

zfs create -o compression=lz4 tank/data

It reads almost like a sentence: “Create a new dataset called tank/data with compression enabled using LZ4.”

Each option plays a grammatical role:

create — the verb
-o compression=lz4 — a property or adverbial modifier
tank/data — the object being acted upon

One fluent expression defines what and how. The syntax is a kind of expressive and efficient shell-native DSL.

This phase of CLI design is baroque: not minimalist, not verbose, but literary in its compression of meaning.

6. The Configuration-Driven CLI (Modern Era)

Example: kubectl apply -f deployment.yaml

Today’s tools often speak in declarative terms. Rather than specify every step, you provide a desired state in a file, and the CLI enacts it.

Strengths: scales elegantly in automation, integrates with APIs
Weaknesses: less immediacy; the human feedback loop grows distant.

Across half a century of design, the command line has evolved from terse incantations to expressive languages of intent.

From tuhs at tuhs.org Sun Nov 2 01:45:58 2025
From: tuhs at tuhs.org (Larry McVoy via TUHS)
Date: Sat, 1 Nov 2025 08:45:58 -0700
Subject: [TUHS] 3 essays on the unix legacy
In-Reply-To: 
References: 
Message-ID: <20251101154558.GH22772@mcvoy.com>

These are definitely rose-colored-glasses versions of history. I'd argue that SunOS was far closer to a perfect kernel than Solaris. Solaris wanted to have that title but never did. It tried, but the soul of the OS sort of left when they switched to System V.

As for ZFS, I'm the guy that hired Bonwick away from Stanford to come to Sun and I'm personal friends with Bill Moore. ZFS is a giant disappointment to me. Why? It undid _years_ of work to have a reasonable architecture for the page cache.

In BSD, before mmap() came along, there was just the buffer cache; blocks, inodes, and directories were all in the buffer cache. mmap() introduced the concept of a page cache, similar to blocks but not quite the same. Blocks were read/write and pages were mmap().
This model sucked because you could have two copies of a page, one in the page cache and one in the buffer cache. That creates the classic cache coherency problem: I can stuff data into an mmap()ed page while someone else changes the buffer cache copy of the same page.

Sun didn't like that model, so they did a shit ton of work to redo the VM system and the I/O system so that there was just one copy of the truth: the page cache. Read/write/mmap all worked on the same pages. The buffer cache stuck around only for inodes and directories.

ZFS kind of messed with the model because they allowed for compressed files. All of the rest of the system knew the size of a page. For example, in the block list in the inode, it's assumed that each entry (other than a possible frag at the end, which is handled by ino->length) is block sized. Compression kills that idea; you need more information. ZFS decided that it was too hard to have compression on top of a page cache and put back its own buffer cache. Unforgivable in my mind: after SunOS did such a nice job of unifying everything around the page cache, ZFS reintroduced a known coherency problem.

I'd give ZFS credit for a lot of other stuff, but I would have flunked it in the OS course I was teaching at Stanford. I get to say that because in BitKeeper we implemented the compressed (and CRCed and XORed) layer underneath mmap(). We proved that it could be done. You needed 2 ints per block rather than 1, but there is a body of code that shows it can be done, and it works for read/write/mmap (not on Windows, that was a shit show). So when I talked to Bill about it and he said it was too hard, I was hugely disappointed.

These days, there doesn't seem to be anyone who cares enough about clean OS architecture and wants to fix it.
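A small C sketch makes the coherency point concrete. On a unified-page-cache system the two views below are backed by the same physical page, so each side sees the other's store immediately; with split buffer and page caches there were two copies that could drift apart. The file name is invented for the example:

    /* coherency.c: the read/write vs. mmap() view of one file.
     * On a unified page cache both views are the same physical page;
     * with split buffer and page caches they could diverge.
     * POSIX assumed, most error handling trimmed. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);
        char buf[12] = {0};

        if (fd < 0) {
            perror("open");
            return 1;
        }
        ftruncate(fd, 4096);                 /* one page of backing store */

        char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        pwrite(fd, "via write()", 11, 0);    /* store through the fd...  */
        printf("mmap view: %.11s\n", map);   /* ...seen through the map  */

        memcpy(map, "via mmap()!", 11);      /* store through the map... */
        pread(fd, buf, 11, 0);               /* ...seen through read()   */
        printf("read view: %s\n", buf);

        munmap(map, 4096);
        close(fd);
        return 0;
    }

And the "2 ints per block" remark amounts to keeping an explicit (offset, length) pair for each compressed block, instead of computing offsets from a fixed block size the way a classic inode block list does.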
On Sat, Nov 01, 2025 at 10:42:14AM -0400, A. P. Garcia via TUHS wrote:
> [...]
--
---
Larry McVoy  Retired to fishing  http://www.mcvoy.com/lm/boat

From tuhs at tuhs.org Sun Nov 2 01:57:37 2025
From: tuhs at tuhs.org (Marc Donner via TUHS)
Date: Sat, 1 Nov 2025 11:57:37 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: 

A lot of what you say is appealing and resonates with me.

Let me offer another dimension to help think about the evolution of CLI goodies: automation.

In the early days of the Arpanet we got a wonderful ability: transfer a file from one computer to another without writing a tape and making a trip to the post office. FTP was wonderful. With administrative coherence we also got smoother integration with tools like rcp.

Along with these things came rapid growth in the number of machines in a domain and the need to manage them coherently. Now we needed to transfer a bunch of files to a bunch of different machines. Enter rdist. (We will leave the security challenges to the side.) Suddenly we could establish a common system image for a large number of machines.

We then discovered that not all machines should be absolutely identical, so we layered all sorts of machinery on top of rdist and its multifarious descendants so that we could keep various subtrees coherent.

What we ended up with is a growing set of layered abstractions. At the bottom were some fairly simple pieces of machinery that did this or that on the bare OS. Next were a collection of abstractions that automated the orchestration of these underlying bits. Some of these abstractions turned out to be seminal innovations in and of themselves and were then used in developing yet another tier of abstractions and automations on top of the second tier.

As time passed we layered more and more abstractions.

Of course, from time to time we also looked at the chaotic pile of abstractions and attempted to streamline and simplify them, with varying levels of success.

Best,

Marc
=====
mindthegapdialogs.com
north-fork.info

On Sat, Nov 1, 2025 at 10:59 AM A. P. Garcia via TUHS wrote:
> [...]
From tuhs at tuhs.org Sun Nov 2 02:34:03 2025
From: tuhs at tuhs.org (Clem Cole via TUHS)
Date: Sat, 1 Nov 2025 12:34:03 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: 

Marc, I agree. Like you, I think Phillips' observations resonate, but you nailed it with the drive for higher-level abstractions: being able to do more through better automation of a lower-level idea or facility.

On Sat, Nov 1, 2025 at 11:57 AM Marc Donner via TUHS wrote:
> [...]
From tuhs at tuhs.org Sun Nov 2 02:45:14 2025
From: tuhs at tuhs.org (Clem Cole via TUHS)
Date: Sat, 1 Nov 2025 12:45:14 -0400
Subject: [TUHS] 3 essays on the unix legacy
In-Reply-To: 
References: 
Message-ID: 

Hrrmph. IMO: This is trying to fit the data over the graph you want.

I never agreed with ESR's model. Linux was (and continues to be) a Cathedral. It just has different master builders than BSD, SunOS, SVR4, VMS, and NT did.

In all cases, there were two core drivers: 1) control [he got to say what was going to be there] and 2) the economics of the time when each became popular.

e.g., Richard Hamming's parody of Isaac Newton's famous quote, "Mathematicians stand on each other's shoulders and computer scientists stand on each other's toes," is the driver for each team; and the outcome is less driven by design and architecture than it is by being *the most cost-effective solution to something a user of the technology requires.*

On Sat, Nov 1, 2025 at 10:42 AM A. P. Garcia via TUHS wrote:
> [...]
From tuhs at tuhs.org Sun Nov 2 02:58:06 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 12:58:06 -0400
Subject: [TUHS] 3 essays on the unix legacy
In-Reply-To: 
References: 
Message-ID: 

Damn, Clem. Lay down the truth. I really appreciate this perspective. It grounds the idealism of ESR's framing (and mine, to an extent) in the practical realities of control and economics.

On Sat, Nov 1, 2025, 12:45 PM Clem Cole wrote:

> Hrrmph. IMO: This is trying to fit the data over the graph you want.
>
> I never agreed with ESR's model. Linux was (and continues to be) a
> Cathedral. It just has different master builders than BSD, SunOS, SVR4,
> VMS, and NT did.
>
> In all cases, there were two core drivers: 1) control [he got to say what
> was going to be there] and 2) the economics of the time when each became
> popular.
>
> e.g., Richard Hamming's parody of Isaac Newton's famous quote,
> "Mathematicians stand on each other's shoulders and computer scientists
> stand on each other's toes," is the driver for each team; and the outcome
> is less driven by design and architecture than it is by being *the most
> cost-effective solution to something a user of the technology requires.*
>
> On Sat, Nov 1, 2025 at 10:42 AM A. P. Garcia via TUHS wrote:
>> [...]
From tuhs at tuhs.org Sun Nov 2 03:05:47 2025
From: tuhs at tuhs.org (Alan Coopersmith via TUHS)
Date: Sat, 1 Nov 2025 10:05:47 -0700
Subject: [TUHS] 3 essays on the unix legacy
In-Reply-To: 
References: 
Message-ID: <196f66ae-5f43-44b4-bd61-d661630a0970@oracle.com>

On 11/1/25 07:42, A. P. Garcia via TUHS wrote:
> Linux took the opposite path. Its ecosystem is messy, distributed, and
> loud, a bazaar where competing ideas coexist until one wins by survival,
> not decree. It doesn’t import technologies wholesale. It reinvents them
> from first principles.
>
> That’s why instead of adopting DTrace, Linux built eBPF, a programmable
> virtual machine for tracing, networking, and observability. It’s more
> complex, less elegant, but more adaptable.

Except of course, Linux built eBPF on top of BPF, a technology imported wholesale from BSD. The difference between how Linux looked at DTrace & BPF is one of license terms, not philosophy - they were willing to accept BSD-licensed imports, but not CDDL-licensed ones.

--
-Alan Coopersmith- alan.coopersmith at oracle.com
Oracle Solaris Engineering - https://blogs.oracle.com/solaris

From tuhs at tuhs.org Sun Nov 2 03:08:17 2025
From: tuhs at tuhs.org (Warner Losh via TUHS)
Date: Sat, 1 Nov 2025 11:08:17 -0600
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: 

What about the JSON trend? Programs that emit complex datasets mediated by jq.

On Sat, Nov 1, 2025, 10:34 AM Clem Cole via TUHS wrote:
> [...]
On Sat, Nov 1, 2025, 10:34 AM Clem Cole via TUHS wrote:

> Marc, I agree. Like you, I think Phillips' observations resonate, but you nailed it with the drive for higher-level abstractions/being able to do more as better automation of a lower-level idea or facility.
>
> On Sat, Nov 1, 2025 at 11:57 AM Marc Donner via TUHS wrote:
>
> > A lot of what you say is appealing and resonates with me.
> >
> > Let me offer another dimension to help think about the evolution of CLI goodies: automation.
> >
> > In the early days of the Arpanet we got a wonderful ability - transfer a file from one computer to another without writing a tape and making a trip to the post office. FTP was wonderful. With administrative coherence we also got smoother integration with tools like rcp.
> >
> > Along with these things came rapid growth in the number of machines in a domain and the need to manage them coherently. Now we needed to transfer a bunch of files to a bunch of different machines. Enter rdist. (We will leave the security challenges to the side.) Suddenly we could establish a common system image for a large number of machines.
> >
> > We then discovered that not all machines should be absolutely identical, so we layered all sorts of machinery on top of rdist and its multifarious descendants so that we could keep various subtrees coherent.
> >
> > What we ended up with is a growing set of layered abstractions. At the bottom were some fairly simple pieces of machinery that did this or that on the bare OS. Next were a collection of abstractions that automated the orchestration of these underlying bits. Some of these abstractions turned out to be seminal innovations in and of themselves and were then used in developing yet another tier of abstractions and automations on top of the second tier.
> >
> > As time passed we layered more and more abstractions.
> >
> > Of course, from time to time we also looked at the chaotic pile of abstractions and attempted to streamline and simplify them, with varying levels of success.
> >
> > Best,
> >
> > Marc
> > =====
> > mindthegapdialogs.com
> > north-fork.info
> >
> > On Sat, Nov 1, 2025 at 10:59 AM A. P. Garcia via TUHS wrote:
> >
> > > i'm a bit reluctant to post this here lest you rip it apart, but i guess i'm ok with that if it happens. i'm more interested in learning the truth than i am in being right.
> > >
> > > The Evolution of the Command Line: From Terseness to Expression
> > >
> > > 1. The Classical Unix Model (1970s–80s)
> > >
> > > cmd -flags arguments
> > >
> > > The early Unix commands embodied the ideal of “do one thing well.” Each flag was terse and mnemonic (-l, -r), and each utility was atomic. The shell provided composition through pipes. Commands like grep, cut, and sort combined to perform a series of operations on the same data stream.
> > >
> > > 2. The GNU Era (late 80s–90s)
> > >
> > > cmd --long-simple --long-key=value [arguments]
> > >
> > > The GNU Project introduced long options to help people remember what the terse flags meant. Common options like --help and --version made tools self-describing.
> > >
> > > Strengths: clarity, accessibility, scriptability
> > > Weaknesses: creeping featurism
> > >
> > > 3. The “Swiss Army Knife” Model (1990s–2000s)
> > >
> > > Next was consolidation. Developers shipped a single binary with multiple subcommands:
> > >
> > > command subcommand [options] [arguments]
> > >
> > > Example: openssl x509 -in cert.pem -noout -text
> > >
> > > Each subcommand occupied its own domain, effectively creating namespaces. This structure defined tools like git, svn, and openssl.
> > >
> > > Strengths: unified packaging, logical grouping
> > > Weaknesses: internal inconsistency; subcommands evolved unevenly.
> > >
> > > 4. The Verb-Oriented CLI (2000s–Present)
> > >
> > > As CLIs matured, their design grew more linguistic. Tools like Docker, Git, and Kubernetes introduced verb-oriented hierarchies:
> > >
> > > tool verb [object] [flags]
> > >
> > > Example: docker run -it ubuntu bash
> > >
> > > This mapped naturally to the mental model of performing actions: “Docker, run this container.” “Git, commit this change.” Frameworks like Go’s Cobra or Python’s Click standardized the pattern.
> > >
> > > Strengths: extensible, discoverable, self-documenting
> > > Weaknesses: verbosity and conceptual overhead. A CLI became an ecosystem.
> > >
> > > 5. The Sententious Model
> > >
> > > When a domain grows too complex for neat hierarchies, a single command becomes a compact expression of a workflow. Consider zfs, an elegant example of declarative-imperative blending:
> > >
> > > zfs create -o compression=lz4 tank/data
> > >
> > > It reads almost like a sentence: “Create a new dataset called tank/data with compression enabled using LZ4.”
> > >
> > > Each option plays a grammatical role:
> > >
> > > create — the verb
> > > -o compression=lz4 — a property or adverbial modifier
> > > tank/data — the object being acted upon
> > >
> > > One fluent expression defines what and how. The syntax is a kind of expressive and efficient shell-native DSL.
> > >
> > > This phase of CLI design is baroque: not minimalist, not verbose, but literary in its compression of meaning.
> > >
> > > 6. The Configuration-Driven CLI (Modern Era)
> > >
> > > Example: kubectl apply -f deployment.yaml
> > >
> > > Today’s tools often speak in declarative terms. Rather than specify every step, you provide a desired state in a file, and the CLI enacts it.
> > >
> > > Strengths: scales elegantly in automation, integrates with APIs
> > > Weaknesses: less immediacy; the human feedback loop grows distant.
> > >
> > > Across half a century of design, the command line has evolved from terse incantations to expressive languages of intent.

From tuhs at tuhs.org Sun Nov 2 03:11:49 2025
From: tuhs at tuhs.org (Marc Rochkind via TUHS)
Date: Sat, 1 Nov 2025 11:11:49 -0600
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: 
References: 
Message-ID: 

You say about Linux: "It doesn’t import technologies wholesale. It reinvents them from first principles." Maybe that's true of the implementation, but the design is a copy. I'd say that about GNU also. The design is the hard part, and that's what the UNIX invention was. Remember that even Thompson and Ritchie implemented UNIX twice.

Regarding that statement about eyeballs and shallow bugs, I don't think there's much evidence to support it. (Very few, if any, principles of software development have been subjected to scientific study, unlike, say, medicine or civil engineering.) With open source, bugs tend to be discovered by users, who might also be developers. It's better for bugs to be discovered by testers.

Marc Rochkind

From tuhs at tuhs.org Sun Nov 2 03:13:33 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 13:13:33 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: 

Clem, Marc, this is incredibly helpful context, thank you.

Clem, your “Linux is still a cathedral, just with different master builders” hit me hard, because it immediately reframes this not as mythology but as governance and economics. Who gets to steer. Who keeps it cheap enough to win.

Marc, your point about make as externalized memory lit up the other half of my brain. The idea that half of sysadmin life was just not forgetting the exact incantation from yesterday. Yes. I’ve lived a tiny, modern version of that and it still hurts.

Where this all lands for me is: if Unix historically survived because we kept capturing practice in repeatable form (make, shell, cron, rc scripts, etc.), maybe the next logical step is to expose the machine itself in those terms.
What I’ve been sketching out with a friend is a shell where the fundamental objects aren’t strings or ad-hoc JSON, but live views of kernel structures.

For example, every process becomes a Task object that conceptually maps to task_struct:

t = kernel.tasks.find(pid=1234)
t.pid -> 1234
t.comm -> "sshd"
t.state -> "TASK_RUNNING"
t.parent.pid -> 1
t.children -> [ ... ]

t.kill(SIGTERM)
t.set_prio(80)

Under the hood it’s not magic — it’s just reading /proc/1234/*, assembling a stable “Task” view from the pieces the kernel already exports, and giving you safe verbs that wrap the normal syscalls (kill(2), cgroup moves, etc.).
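To show how little magic is involved, here is a minimal read-only cut of that Task view in plain Python (a sketch only: the class name and fields are illustrative rather than a real API, it is Linux-only by construction, and a real version would cache results and handle vanished pids):

import os
import signal

class Task:
    """Read-only view of one process, assembled from /proc/<pid>/*."""

    def __init__(self, pid: int):
        self.pid = pid

    def _read(self, name: str) -> str:
        with open(f"/proc/{self.pid}/{name}") as f:
            return f.read()

    @property
    def comm(self) -> str:
        # /proc/<pid>/comm holds the command name, newline-terminated
        return self._read("comm").strip()

    @property
    def state(self) -> str:
        # the "State:" line of /proc/<pid>/status, e.g. "S (sleeping)"
        for line in self._read("status").splitlines():
            if line.startswith("State:"):
                return line.split(":", 1)[1].strip()
        return "?"

    @property
    def parent(self) -> "Task":
        for line in self._read("status").splitlines():
            if line.startswith("PPid:"):
                return Task(int(line.split(":", 1)[1]))
        raise RuntimeError(f"no PPid for pid {self.pid}")

    def kill(self, sig: int = signal.SIGTERM) -> None:
        os.kill(self.pid, sig)  # the "safe verb" is just kill(2)

Used as: t = Task(1234); print(t.comm, t.state, t.parent.pid). Everything else is more of the same.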
Same idea for network interfaces (struct net_device), mounts / superblocks (struct super_block), open sockets (struct sock), etc. You’d get objects like NetIf, Mount, Socket, each with fields and sensible methods:

iface = kernel.net.ifaces["eth0"]
iface.mtu -> 1500
iface.rx_bytes -> 123456789
iface.addrs_v4 -> ["192.0.2.10/24"]

iface.up()
iface.set_mtu(9000)
iface.add_addr("192.0.2.99/24")

The goal here is not to invent some shiny abstraction layer — it’s almost the opposite. It’s to acknowledge, honestly, “this is how the kernel already thinks about the world,” and then hand that to the operator as a first-class, queryable vocabulary.

Why I think this lines up with both of your notes:

• Clem’s control point — this becomes the control surface. You keep the cathedral (Linus, subsystem maintainers, etc.), but you finally give the folks in production a coherent, inspectable, scriptable view of that cathedral’s state instead of twenty tiny tools with incompatible flags.

• Marc’s memory point — this becomes institutional memory. Instead of “what was that five-stage awk pipeline Karl wrote in ’97 to find stuck tasks?”, you ask the system:

kernel.tasks
  .where('$.state == "TASK_UNINTERRUPTIBLE" && $.waiting_on == "io_schedule"')
  .group('$.cgroup')

and you get structured results you can act on. That knowledge survives handoff.

The other (slightly mind-blowing) side effect is that the same interface could be pointed at a crash dump or a snapshot, so postmortem triage could look exactly like live triage.

I’d love to hear if this resonates with your lived experience, or if you’d say “nice dream, kid, but here’s why it falls apart in the real world.”

Because to me it feels like the same thread you both pulled on: we’ve always been trying to capture practice so we can hand it to the next person without losing our minds.

Phil

On Sat, Nov 1, 2025, 12:34 PM Clem Cole wrote:
> [...]

From tuhs at tuhs.org Sun Nov 2 03:29:48 2025
From: tuhs at tuhs.org (Arnold Robbins via TUHS)
Date: Sat, 01 Nov 2025 11:29:48 -0600
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: <202511011729.5A1HTmRR026088@freefriends.org>

"A. P. Garcia via TUHS" wrote:

> What I’ve been sketching out with a friend is a shell where the fundamental objects aren’t strings or ad-hoc JSON, but live views of kernel structures.
>
> [...]
>
> I’d love to hear if this resonates with your lived experience, or if you’d say “nice dream, kid, but here’s why it falls apart in the real world.”

Nice dream, kid, but here's why it falls apart in the real world:

Most developers don't need to know that level of non-abstraction or care to work at that level. If you're building product, you hook things together with shell (or other) scripts.

What you're proposing may be dynamite for sysadmins trying to troubleshoot, but that's a small percentage of the potential user base.

Not to mention it's Linux-only and won't port to *BSD, macOS or what-have-you.

Now if Linux sysadmins IS your target audience, go for it.

My two cents, of course,

Arnold

From tuhs at tuhs.org Sun Nov 2 03:41:40 2025
From: tuhs at tuhs.org (Luther Johnson via TUHS)
Date: Sat, 1 Nov 2025 10:41:40 -0700
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: <634840e3-bbbb-5dc3-fe47-eb7f6a5eb7b7@makerlisp.com>

Pardon me if this comment rubs anyone the wrong way, but this discussion seems a little like trying to retrofit intelligent design onto the evolution of species. All things evolve, in somewhat arbitrary and not necessarily purposeful ways, in the context of their environment and immediate stimuli. Later, as a particular evolution proves useful to survival in the long run, it's tempting to say "the species grew an extra arm in order to compete and survive", but of course the species itself, with or without any awareness of it, has no control over its mutations. Well, in writing software we do have a little more control, but we usually do not have as much foresight as we will later attribute to those efforts with the benefit of hindsight.
I do agree with one thread in this discussion, though. Any kind of energy injected into the system will tend to yield more of the same. So the emotional and aesthetic motivations of Unix's earliest contributors will create responses from people who care about the same sorts of things, and derive the same kind of joy from their efforts. That's something I think we can count on.

On 11/01/2025 10:13 AM, A. P. Garcia via TUHS wrote:
> [...]

From tuhs at tuhs.org Sun Nov 2 03:42:35 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 13:42:35 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: <202511011729.5A1HTmRR026088@freefriends.org>
References: <202511011729.5A1HTmRR026088@freefriends.org>
Message-ID: 

Arnold,

First, thank you. I really appreciate you taking the time to answer directly.

I think you’ve put your finger on the most important tension: who is this actually for?

I don’t think this shell would be aimed at “most developers,” and I agree with you that most developers neither want nor need to think in terms of kernel anatomy.
Where I’m trying to land is much narrower: the people who are responsible for keeping a running system healthy at 2am, and for explaining after the fact what went wrong. Call it SRE / ops / old-school Unix admin / performance engineer. That’s the user I have in mind.

Today those folks already are working at that level — they’re just doing it through a pile of ad hoc ps, awk, /proc/$pid/*, lsof, netstat/ss, ifconfig/ip, etc. My argument is basically: can we give them one coherent vocabulary instead of twenty partial ones, and let them script against it in a way that’s readable the next day?

On the portability point: yes, guilty as charged. The examples I gave are Linux-flavored (task_struct, /proc, cgroups, netlink). The idea I’ve been kicking around is that Task, NetIf, Mount, etc., aren’t literally the kernel structs, but stable “operator-facing” objects. On Linux they’d be populated from /proc, /sys, cgroups, netlink, and so on. On a BSD, that same Task object would have to be populated differently (kvm, sysctl, whatever that platform exposes). Same interface at the shell level, different provider under the hood.

In other words: not “the Linux kernel is the universal truth,” but “each OS gets to define how to surface its own truth through the same high-level nouns.”

If that sounds naive, I’m happy to hear “here’s where that abstraction leaks immediately.” I’m trying to figure out whether the conceptual layer is sound enough to be worth pursuing, even if the first usable implementation is Linux-only because that’s where I live.

Thanks again for the reality check. This is exactly the kind of feedback I was hoping for.

Phil

On Sat, Nov 1, 2025, 1:29 PM wrote:
> [...]

From tuhs at tuhs.org Sun Nov 2 04:34:23 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 14:34:23 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: <202511011729.5A1HTmRR026088@freefriends.org>
Message-ID: 

Arnold, your point about portability keeps echoing in my head.

I’ve been thinking that the right answer might be a plugin architecture instead of a hard-wired Linux core — a stable vocabulary of objects (Task, NetIf, Mount, etc.) with back-end modules that know how to populate those objects for each OS.

That way Linux can pull from /proc and /sys, BSD from sysctl or kvm, macOS from libproc, and so on. The interface stays constant; the provider changes.

I’m also leaning toward making those providers language-agnostic, maybe using WASM modules, so anyone could extend the system in whatever they’re comfortable writing.
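Sketched as code, the split might be no more than this (all names hypothetical; a real version would cover NetIf, Mount, and friends, and the WASM hosting is elided here):

from abc import ABC, abstractmethod

class TaskProvider(ABC):
    """Per-OS backend that populates the shared Task vocabulary."""

    @abstractmethod
    def comm(self, pid: int) -> str: ...

class LinuxProcProvider(TaskProvider):
    def comm(self, pid: int) -> str:
        # Linux: the kernel exports the name directly
        with open(f"/proc/{pid}/comm") as f:
            return f.read().strip()

# a BSD provider would implement the same interface over sysctl/kvm,
# macOS over libproc; the shell core only ever sees TaskProvider
PROVIDERS = {"linux": LinuxProcProvider}

def provider_for(os_name: str) -> TaskProvider:
    return PROVIDERS[os_name]()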
It feels like the right compromise between honesty to each platform and a shared operational vocabulary. I’d love to know if that approach strikes you as sensible or if there’s a historical pitfall I’m not seeing.

On Sat, Nov 1, 2025, 1:42 PM A. P. Garcia wrote:
> [...]

From tuhs at tuhs.org Sun Nov 2 04:39:09 2025
From: tuhs at tuhs.org (Douglas McIlroy via TUHS)
Date: Sat, 1 Nov 2025 14:39:09 -0400
Subject: [TUHS] On Graduation from (VI) to (I)
In-Reply-To: 
References: 
Message-ID: 

With hindsight, I see the Section 1/Section 6 dichotomy as being generally aligned with the idea of software tools vs other programs, a term that was not yet in the air at the time of v1. I do not recall any particular stigma or honor attached to the distinction.

A couple of notable exceptions are sort and crypt. I suspect Ken put the original sort in Section 6 because he deemed it unfinished. The man page said "wide options are available". When I asked Ken what they were, he told me one can add them to the source. This may well correspond to Matt's model of juvenile programs in 6 growing to adulthood in 1. Yacc is another such example.

A move in the opposite direction was Fred Grampp's demotion of crypt to the status of toy after it was shown to be easily broken. In fact, Bob Morris had created it as a cryptographic challenge: could people break a basic rotor machine? (Crypt had 1 rotor; Enigma had 4.)

Another Section 6 program that may fit the designation of "juvenile" is the disassembler das, present only in v1 and v2. I have no recollection of the program, and know of its existence only because it's on the list of all man pages.

My text-to-speech program, speak, began in 1, then moved to 6, and eventually disappeared from distributions because it depended on hardware that few Unix installations had. We used it mostly for fun, hence 6, but it might be called a software tool, because it could be used to add speech capability to any program. And it was central to the work of blind programmers elsewhere who obtained it.

The change in designation of Section 6 from "user maintained" to "games" smoked some compilers or interpreters for little-used languages (APL, Basic, TMG) out of Section 6, where I believe they had been placed because there was no felt need to keep them alive as the system evolved.

Factor, which certainly is not a software tool, reveals that the distinction between 1 and 6 was rather capricious. It began in 1, sojourned in 6, and came back to 1. I am not aware of any change in perception of its purpose or importance over time.

cal, a program that I use practically daily, spent much of its life in Section 6. There's no way it can be regarded as a software tool, nor juvenile, nor "user-maintained". It finally found a home in the rechristened Section 7, "Databases and language conventions", which was broadly construed to include information sources. The related category of astronomy programs got similar treatment.

Many programs moved from 1 to 6 and back again, especially in v5 and v6. This cohort included several graphics programs, none of which became standard.

Speaking as the author of speak and of the options in sort, as well as the editor of v7, I think that at the time of v1 I understood the distinction as Matt suggests, but later saw it as a measure of how closely the programs aligned with the central mission of fostering the creation of software and the general utility of computers.

Doug

On Thu, Oct 30, 2025 at 9:23 PM segaloco via TUHS wrote:
>
> Present from the beginning, Section VI of the UNIX Programmer's Manual was the gathering place for all those little programs folks involved in the system had authored for fun and frolic rather than work and business, mostly.
> In-progress works and experimental features also often found themselves relegated to this section. If a feature was lucky (and not too "fun"), it earned the distinction of graduating then to the big leagues in Section I.
>
> Something I'm curious about is what sorts of decisions were involved in choosing between the two sections to slot any given program at any given time. Of course the arbiters of the original manuals would've been folks at Bell/AT&T, but we also see this convention retained in other vendors' offerings, with them also relegating certain additions and components to this oft overlooked section.
>
> For me I'm also curious if there was a sense of pride, or on the flip side, a sense of selling out when/if one's program ascended the marble steps from section VI to section I. On one hand, I would feel proud that my work was appreciated enough to make it. On the other hand, I am a very diy person and would feel similarly proud of how much volume I could shove into section VI without concerning myself with the haughty expectations of those snooty section I programs.
>
> Anyone have any fun stories related to this dichotomy in the manual? Have your feelings ever been hurt because what you thought was section I work was banished to section VI? Was it less of a big deal than my dramatic delivery would suggest?
>
> - Matt G.

From tuhs at tuhs.org Sun Nov 2 04:49:41 2025
From: tuhs at tuhs.org (Arnold Robbins via TUHS)
Date: Sat, 01 Nov 2025 12:49:41 -0600
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: <202511011729.5A1HTmRR026088@freefriends.org>
Message-ID: <202511011849.5A1Infej030801@freefriends.org>

Hi.

"A. P. Garcia" wrote:

> I don’t think this shell would be aimed at “most developers,” and I agree with you that most developers neither want nor need to think in terms of kernel anatomy. Where I’m trying to land is much narrower: the people who are responsible for keeping a running system healthy at 2am, and for explaining after the fact what went wrong. Call it SRE / ops / old-school Unix admin / performance engineer. That’s the user I have in mind.

OK, so I'll buy that your approach makes some sense for that crowd.

> Arnold, your point about portability keeps echoing in my head. I’ve been thinking that the right answer might be a plugin architecture instead of a hard-wired Linux core — a stable vocabulary of objects (Task, NetIf, Mount, etc.) with back-end modules that know how to populate those objects for each OS.
>
> That way Linux can pull from /proc and /sys, BSD from sysctl or kvm, macOS from libproc, and so on. The interface stays constant; the provider changes.

I like that idea.

> I’m also leaning toward making those providers language-agnostic, maybe using WASM modules, so anyone could extend the system in whatever they’re comfortable writing.

I know nothing about WASM, so I can't really comment.

What's echoing in my head at this point is that something like Plan 9's treatment of devices with /dev/xxx/ctl and /dev/xxx/data files would be better - if you could expose a file-system-like interface to the kernel data structures. Maybe via FUSE?

Of course, your query language was sort of SQL-like, and as an old Unix hand I prefer shell scripts, so now my bias is showing.

Interesting ideas, though.
Arnold

From tuhs at tuhs.org Sun Nov 2 05:21:50 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sat, 1 Nov 2025 15:21:50 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: <202511011849.5A1Infej030801@freefriends.org>
References: <202511011729.5A1HTmRR026088@freefriends.org> <202511011849.5A1Infej030801@freefriends.org>
Message-ID: 

Arnold,

This is exactly the kind of reply I was hoping for. Thank you.

The Plan 9 point is really interesting. Exposing kernel/OS state as a mounted virtual filesystem (data + ctl) is honestly a cleaner expression of the same instinct I’ve been circling: make the machine describe itself in a way operators can both read and act on without having to reverse-engineer twenty different tools.

If I squint, I can almost see two layers that could coexist:

• Layer 1 (your suggestion): a FUSE-style mounted tree that surfaces live system objects as directories and plain-text files — /ops/tasks/1234/…, /ops/netif/eth0/…, etc. Read a file to inspect, echo ... > ctl to act. Any shell, any awk script, no new language required. That’s extremely attractive, and it lines up with your “just let me script it” bias (see the sample session after this list).

• Layer 2 (the thing I was sketching): an operator’s console / REPL that sits on top of that same tree and gives you richer views, grouping, filtering, summaries, postmortem queries, JSON output, etc. More of an investigation surface for the 2am incident or the day-after review.
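For Layer 1, a session might look like this (every path and ctl verb here is hypothetical; none of this exists yet):

$ cat /ops/tasks/1234/comm
sshd
$ cat /ops/tasks/1234/state
TASK_RUNNING
$ echo "signal TERM" > /ops/tasks/1234/ctl    # writing the ctl file is the verb
$ cat /ops/netif/eth0/mtu
1500
$ echo "mtu 9000" > /ops/netif/eth0/ctl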
In other words, your Plan 9 model could actually be the substrate. My higher-level “Task / NetIf / Mount objects with methods” could just be a friendlier lens on that substrate — not a replacement for it.

That also answers your portability worry in a very old-school way: each OS could ship its own provider for that filesystem view, however it gathers the data (Linux via /proc, BSD via sysctl/kvm, etc.), and anything above it just consumes files.

I really appreciate you pointing me in that direction. It pulls this back toward something a real Unix person would actually tolerate.

Phil

On Sat, Nov 1, 2025, 2:49 PM wrote:
> [...]

From tuhs at tuhs.org Sun Nov 2 05:25:05 2025
From: tuhs at tuhs.org (Marc Donner via TUHS)
Date: Sat, 1 Nov 2025 15:25:05 -0400
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: 
Message-ID: 

Well, the samples you exhibit are fairly verbose and rely on keys that are difficult to touch type: most people who can touch type or pseudo touch type are primarily facile with the home key row and the rows immediately above and below. Most touch typists cannot touch type numbers, and almost none can touch type the shifted top-row keys (!@# ... _+). Worse yet, too many keyboard makers move the keys around depending on magic that no one understands.

So verbose and hard-to-type stuff will be hard to persuade people to use on the command line. In a program, yes, but on the keyboard, I'm skeptical. And for more complex things, we have Python and other programming / scripting languages that are more than adequate.

Best,

Marc
=====
mindthegapdialogs.com
north-fork.info

On Sat, Nov 1, 2025 at 1:13 PM A. P. Garcia wrote:
> [...]

From tuhs at tuhs.org Sun Nov 2 05:54:09 2025
From: tuhs at tuhs.org (Arnold Robbins via TUHS)
Date: Sat, 01 Nov 2025 13:54:09 -0600
Subject: [TUHS] evolution of the cli
In-Reply-To: 
References: <202511011729.5A1HTmRR026088@freefriends.org> <202511011849.5A1Infej030801@freefriends.org>
Message-ID: <202511011954.5A1Js9Eo034664@freefriends.org>

Glad to help. You can send me some shares in your start-up company to repay me. :-)

Seriously, I am glad to help. It sounds like you could be on to something worthwhile.

Arnold

"A. P. Garcia" wrote:
> [...]
>
> In other words, your Plan 9 model could actually be the substrate. My
> higher-level “Task / NetIf / Mount objects with methods” could just be a
> friendlier lens on that substrate — not a replacement for it.
>
> That also answers your portability worry in a very old-school way: each
> OS could ship its own provider for that filesystem view, however it
> gathers the data (Linux via /proc, BSD via sysctl/kvm, etc.), and
> anything above it just consumes files.
>
> I really appreciate you pointing me in that direction. It pulls this
> back toward something a real Unix person would actually tolerate.
>
>
> Phil
>
>
> On Sat, Nov 1, 2025, 2:49 PM wrote:
>
> > Hi.
> >
> > "A. P. Garcia" wrote:
> >
> > > Arnold,
> > >
> > > First, thank you. I really appreciate you taking the time to answer
> > > directly.
> > >
> > > I think you’ve put your finger on the most important tension: who is
> > > this actually for?
> > >
> > > I don’t think this shell would be aimed at “most developers,” and I
> > > agree with you that most developers neither want nor need to think in
> > > terms of kernel anatomy. Where I’m trying to land is much narrower:
> > > the people who are responsible for keeping a running system healthy
> > > at 2am, and for explaining after the fact what went wrong. Call it
> > > SRE / ops / old-school Unix admin / performance engineer. That’s the
> > > user I have in mind.
> >
> > OK, so I'll buy that your approach makes some sense for that crowd.
> >
> > > Arnold, your point about portability keeps echoing in my head.
> > > I’ve been thinking that the right answer might be a plugin
> > > architecture instead of a hard-wired Linux core — a stable vocabulary
> > > of objects (Task, NetIf, Mount, etc.) with back-end modules that know
> > > how to populate those objects for each OS.
> > >
> > > That way Linux can pull from /proc and /sys, BSD from sysctl or kvm,
> > > macOS from libproc, and so on. The interface stays constant; the
> > > provider changes.
> >
> > I like that idea.
> >
> > > I’m also leaning toward making those providers language-agnostic,
> > > maybe using WASM modules, so anyone could extend the system in
> > > whatever they’re comfortable writing.
> >
> > I know nothing about WASM, so I can't really comment.
> >
> > What's echoing in my head at this point is that something like
> > Plan 9's treatment of devices with /dev/xxx/ctl and /dev/xxx/data
> > files would be better - if you could expose a file-system-like
> > interface to the kernel data structures. Maybe via FUSE?
> >
> > Of course, your query language was sort of SQL-like, and as an
> > old Unix hand I prefer shell scripts, so now my bias is showing.
> >
> > Interesting ideas, though.
> >
> > Arnold

From tuhs at tuhs.org Sun Nov 2 12:13:22 2025
From: tuhs at tuhs.org (Theodore Tso via TUHS)
Date: Sat, 1 Nov 2025 22:13:22 -0400
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <196f66ae-5f43-44b4-bd61-d661630a0970@oracle.com>
Message-ID: <20251102021322.GA77791@macsyma.lan>

On Sat, Nov 01, 2025 at 12:45:14PM -0400, Clem Cole via TUHS wrote:
> Hrrmph. IMO: This is trying to fit the data over the graph you want.
>
> I never agreed with ESR's model. Linux was (and continues to be) a
> Cathedral. It just has different master builders than BSD, SunOS, SVR4,
> VMS, and NT did.

Speaking as someone who was working on Linux from the beginning (I was
the first North American Linux kernel hacker, and the first US FTP
redistribution site for Linux was my desktop workstation at MIT, a
Vaxstation 3100 M38 running Ultrix), I never agreed with ESR's model
either, and I agree that Mr. Garcia's graph is also... not reflective
of reality.

First of all, ESR never spoke for the Linux community; certainly not
all of it. And the whole "with enough eyeballs, all bugs are shallow"
might be true for some bugs, but it is *most* definitely not true for
all; especially in the kernel or multi-threaded programs. More
generally, ESR's thesis was only really applicable or interesting for
projects in a relatively narrow band of complexity. Projects which
are too small/simple aren't interesting enough to have sufficient
gravitational pull to create a viable development community. And
that's why there are a huge number of abandoned trivial projects on
GitHub or SourceForge. The plausibility of ESR's Cathedral and Bazaar
essay relies on survivorship bias.

The Cathedral model also falls apart for projects that grow beyond a
certain complexity, when it requires a non-trivial number of engineers
working full-time, or at least a significant percentage of their time,
on the project.
At that point, it's not just about a developer wanting
to scratch their personal itch, but what their employer is willing to
fund --- since unless the developer is independently wealthy, most
developers do prefer to have food with their meals.

At that point, what gets attention is very strongly affected by what
one or more companies have a business case to fund. An especially
talented project leader with a product management team can sometimes
stitch together development programs where different engineers working
for different companies can work together on different features that
benefit all of their employers. But not everyone has that particular
combination of technical, social, and business skills.

Most people who try to create models of how "the open source
community" works tend to forget both the needs of the companies who
fund projects, and the passion and commitment of the engineers in
those projects, many of whom sacrifice a certain amount of career or
financial prospects because they care about the community of which
they have become a part. Many of these engineers have worked for more
than one company, and have known colleagues who have worked at
multiple companies. So these engineers tend to have a group loyalty
which is not precisely mapped to their employer; but at the same time,
they know that they need to add enough value to their employer so that
(a) their company stays in business, and (b) the company is willing to
continue to pay their salary.

It's complicated(tm).

On Sat, Nov 01, 2025 at 10:05:47AM -0700, Alan Coopersmith via TUHS wrote:
> On 11/1/25 07:42, A. P. Garcia via TUHS wrote:
> > Linux took the opposite path. Its ecosystem is messy, distributed, and
> > loud, a bazaar where competing ideas coexist until one wins by survival,
> > not decree. It doesn’t import technologies wholesale. It reinvents them
> > from first principles.
> >
> > That’s why instead of adopting DTrace, Linux built eBPF, a programmable
> > virtual machine for tracing, networking, and observability. It’s more
> > complex, less elegant, but more adaptable.
>
> Except of course, Linux built eBPF on top of BPF, a technology imported
> wholesale from BSD. The difference between how Linux looked at DTrace &
> BPF is one of license terms, not philosophy - they were willing to accept
> BSD-licensed imports, but not CDDL-licensed ones.

Absolutely. Linux is quite willing to take ideas and code from
everywhere, so long as (a) it's good, and (b) the copyright license is
compatible. For example, Read-Copy-Update (RCU) was a technique that
was created and patented by Sequent, and after IBM purchased Sequent,
IBM donated the patent to Linux, and had former Sequent engineers
(working for IBM's Linux Technology Center) implement RCU for Linux.

We'll take code and ideas from wherever we can get them.

If Oracle hadn't outbid IBM to purchase Sun Microsystems (and some of
us believe that some executives at Sun leaked details of the
negotiation to the Wall Street Journal to draw competitors such as
Oracle to bid on Sun; certainly IBM had no incentive to leak what came
out in the press), it is very likely that I would have been on the
teams sent to Sun, and we would have tried to relicense DTrace and ZFS
from the CDDL to the GPL or some GPL-compatible license.

It's interesting to consider what the alternate history might have
been if we could have merged the best of the Solaris technology into
Linux, and if we could have welcomed some of the Solaris team into the
Linux community.
I certainly had nothing but respect for them, and I
always thought that they had been badly let down by their management
and sales teams. They deserved better.

My personal belief is that Oracle's acquisition of Sun Microsystems,
while it may have represented a better deal for Sun's shareholders,
was ultimately a tragedy for the industry as a whole.

Cheers,

- Ted

From tuhs at tuhs.org Sun Nov 2 12:47:24 2025
From: tuhs at tuhs.org (Warner Losh via TUHS)
Date: Sat, 1 Nov 2025 20:47:24 -0600
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <20251102021322.GA77791@macsyma.lan>
References: <196f66ae-5f43-44b4-bd61-d661630a0970@oracle.com>
 <20251102021322.GA77791@macsyma.lan>
Message-ID:

On Sat, Nov 1, 2025, 8:13 PM Theodore Tso via TUHS wrote:

> On Sat, Nov 01, 2025 at 12:45:14PM -0400, Clem Cole via TUHS wrote:
> > Hrrmph. IMO: This is trying to fit the data over the graph you want.
> >
> > I never agreed with ESR's model. Linux was (and continues to be) a
> > Cathedral. It just has different master builders than BSD, SunOS, SVR4,
> > VMS, and NT did.
>
> Speaking as someone who was working on Linux from the beginning (I was
> the first North American Linux kernel hacker, and the first US FTP
> redistribution site for Linux was my desktop workstation at MIT, a
> Vaxstation 3100 M38 running Ultrix), I never agreed with ESR's model
> either, and I agree that Mr. Garcia's graph is also... not reflective
> of reality.
>
> First of all, ESR never spoke for the Linux community; certainly not
> all of it. And the whole "with enough eyeballs, all bugs are shallow"
> might be true for some bugs, but it is *most* definitely not true for
> all; especially in the kernel or multi-threaded programs. More
> generally, ESR's thesis was only really applicable or interesting for
> projects in a relatively narrow band of complexity. Projects which
> are too small/simple aren't interesting enough to have sufficient
> gravitational pull to create a viable development community. And
> that's why there are a huge number of abandoned trivial projects on
> GitHub or SourceForge. The plausibility of ESR's Cathedral and Bazaar
> essay relies on survivorship bias.
>
> The Cathedral model also falls apart for projects that grow beyond a
> certain complexity, when it requires a non-trivial number of engineers
> working full-time, or at least a significant percentage of their time,
> on the project. At that point, it's not just about a developer wanting
> to scratch their personal itch, but what their employer is willing to
> fund --- since unless the developer is independently wealthy, most
> developers do prefer to have food with their meals.
>
> At that point, what gets attention is very strongly affected by what
> one or more companies have a business case to fund. An especially
> talented project leader with a product management team can sometimes
> stitch together development programs where different engineers working
> for different companies can work together on different features that
> benefit all of their employers. But not everyone has that particular
> combination of technical, social, and business skills.
>
> Most people who try to create models of how "the open source
> community" works tend to forget both the needs of the companies who
> fund projects, and the passion and commitment of the engineers in
> those projects, many of whom sacrifice a certain amount of career or
> financial prospects because they care about the community of which
> they have become a part.
> Many of these engineers have worked for more
> than one company, and have known colleagues who have worked at
> multiple companies. So these engineers tend to have a group loyalty
> which is not precisely mapped to their employer; but at the same time,
> they know that they need to add enough value to their employer so that
> (a) their company stays in business, and (b) the company is willing to
> continue to pay their salary.
>
> It's complicated(tm).
>
> On Sat, Nov 01, 2025 at 10:05:47AM -0700, Alan Coopersmith via TUHS wrote:
> > On 11/1/25 07:42, A. P. Garcia via TUHS wrote:
> > > Linux took the opposite path. Its ecosystem is messy, distributed,
> > > and loud, a bazaar where competing ideas coexist until one wins by
> > > survival, not decree. It doesn’t import technologies wholesale. It
> > > reinvents them from first principles.
> > >
> > > That’s why instead of adopting DTrace, Linux built eBPF, a
> > > programmable virtual machine for tracing, networking, and
> > > observability. It’s more complex, less elegant, but more adaptable.
> >
> > Except of course, Linux built eBPF on top of BPF, a technology imported
> > wholesale from BSD. The difference between how Linux looked at DTrace &
> > BPF is one of license terms, not philosophy - they were willing to
> > accept BSD-licensed imports, but not CDDL-licensed ones.

Also, in large part, eBPF and BPF share only three letters of their
name. Extending it meant completely redoing it in the end, with a lot
of learning by fire as the exploits came out. But you can't deny that
it's easier to use than DTrace and has gone well beyond what DTrace
could easily be used for (it could in theory, but practicalities made
it hard to use as a firewall, much less a portable one).

> Absolutely. Linux is quite willing to take ideas and code from
> everywhere, so long as (a) it's good, and (b) the copyright license is
> compatible. For example, Read-Copy-Update (RCU) was a technique that
> was created and patented by Sequent, and after IBM purchased Sequent,
> IBM donated the patent to Linux, and had former Sequent engineers
> (working for IBM's Linux Technology Center) implement RCU for Linux.
>
> We'll take code and ideas from wherever we can get them.

Indeed. At times, though, part of what it takes to get good code into
the tree has an element of politics about it. It's another aspect of
open source ESR's model fails to capture.

> If Oracle hadn't outbid IBM to purchase Sun Microsystems (and some of
> us believe that some executives at Sun leaked details of the
> negotiation to the Wall Street Journal to draw competitors such as
> Oracle to bid on Sun; certainly IBM had no incentive to leak what came
> out in the press), it is very likely that I would have been on the
> teams sent to Sun, and we would have tried to relicense DTrace and ZFS
> from the CDDL to the GPL or some GPL-compatible license.

FreeBSD would have loved a BSD or MIT license for both of these. :)
At one point an engineer/manager at Oracle announced at a conference
that ZFS would be relicensed as GPL. He was fired a few days later.
Oracle really doesn't want to relicense.

> It's interesting to consider what the alternate history might have
> been if we could have merged the best of the Solaris technology into
> Linux, and if we could have welcomed some of the Solaris team into the
> Linux community. I certainly had nothing but respect for them, and I
> always thought that they had been badly let down by their management
> and sales teams. They deserved better.

Indeed.
I tried to get some of them interested in working on
FreeBSD after all that, but the damage was done... while they had jobs
with Oracle, they were happy to make Solaris better and did a damn
fine job at it. Once it fell apart, though, everyone was too burned
out...

> My personal belief is that Oracle's acquisition of Sun Microsystems,
> while it may have represented a better deal for Sun's shareholders,
> was ultimately a tragedy for the industry as a whole.

Agreed. Sun had cool technology and moved a lot into open source. It
was an interesting experiment that may have had a hand in their fall
from profitability... though that whole path was rather complicated.

Warner

> Cheers,
>
> - Ted

From tuhs at tuhs.org Sun Nov 2 13:41:17 2025
From: tuhs at tuhs.org (steve jenkin via TUHS)
Date: Sun, 2 Nov 2025 14:41:17 +1100
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <20251102021322.GA77791@macsyma.lan>
References: <20251102021322.GA77791@macsyma.lan>
Message-ID: <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>

Ted Tso is IMHO a definitive commentator for the Linux kernel.
I’m not qualified to comment on the kernel or its development.

ESR, RMS & the FSF haven’t addressed or publicly reimagined the Open
Source model for the modern world.

Money matters. In the long term, all, not just the best talent, needs
to be paid decent wages.

Most democracies don’t rely on Churches & Charities to provide all
their Social services. It’s seen as a ‘Public Good’ to feed & house
people when in need. Short term unpaid volunteerism is fine - a little
altruism isn’t invasive.

Open Source has turned out to be a marathon, not a sprint, and the
original simple unpaid volunteer model is failing.

Here are three gaps I think need to be urgently addressed:

- paying wages gives large commercial players control of features
- multiple unfunded critical projects exist with few new maintainers
- supply chain & other security attacks need to be countered

> On 2 Nov 2025, at 13:13, Theodore Tso via TUHS wrote:
>
> On Sat, Nov 01, 2025 at 12:45:14PM -0400, Clem Cole via TUHS wrote:
>> Hrrmph. IMO: This is trying to fit the data over the graph you want.
>>
>> I never agreed with ESR's model. Linux was (and continues to be) a
>> Cathedral. It just has different master builders than BSD, SunOS, SVR4,
>> VMS, and NT did.
>
> Speaking as someone who was working on Linux from the beginning (I was
> the first North American Linux kernel hacker, and the first US FTP
> redistribution site for Linux was my desktop workstation at MIT, a
> Vaxstation 3100 M38 running Ultrix), I never agreed with ESR's model
> either, and I agree that Mr. Garcia's graph is also... not reflective
> of reality.
>
> First of all, ESR never spoke for the Linux community;
> certainly not all of it.
>
>
> It's complicated(tm).
>
> On Sat, Nov 01, 2025 at 10:05:47AM -0700, Alan Coopersmith via TUHS wrote:
>> On 11/1/25 07:42, A. P. Garcia via TUHS wrote:
>>> Linux took the opposite path. Its ecosystem is messy, distributed, and
>>> loud, a bazaar where competing ideas coexist until one wins by survival,
>>> not decree. It doesn’t import technologies wholesale. It reinvents them
>>> from first principles.
>>>
>>> That’s why instead of adopting DTrace, Linux built eBPF, a programmable
>>> virtual machine for tracing, networking, and observability. It’s more
>>> complex, less elegant, but more adaptable.
>>
>> Except of course, Linux built eBPF on top of BPF, a technology imported
>> wholesale from BSD.
>> The difference between how Linux looked at DTrace &
>> BPF is one of license terms, not philosophy - they were willing to accept
>> BSD-licensed imports, but not CDDL-licensed ones.
>
> Absolutely. Linux is quite willing to take ideas and code from
> everywhere, so long as (a) it's good, and (b) the copyright license is
> compatible. For example, Read-Copy-Update (RCU) was a technique that
> was created and patented by Sequent, and after IBM purchased Sequent,
> IBM donated the patent to Linux, and had former Sequent engineers
> (working for IBM's Linux Technology Center) implement RCU for Linux.
>
> We'll take code and ideas from wherever we can get them.
>
> If Oracle hadn't outbid IBM to purchase Sun Microsystems (and some of
> us believe that some executives at Sun leaked details of the
> negotiation to the Wall Street Journal to draw competitors such as
> Oracle to bid on Sun; certainly IBM had no incentive to leak what came
> out in the press), it is very likely that I would have been on the
> teams sent to Sun, and we would have tried to relicense DTrace and ZFS
> from the CDDL to the GPL or some GPL-compatible license.
>
> It's interesting to consider what the alternate history might have
> been if we could have merged the best of the Solaris technology into
> Linux, and if we could have welcomed some of the Solaris team into the
> Linux community. I certainly had nothing but respect for them, and I
> always thought that they had been badly let down by their management
> and sales teams. They deserved better.
>
> My personal belief is that Oracle's acquisition of Sun Microsystems,
> while it may have represented a better deal for Sun's shareholders,
> was ultimately a tragedy for the industry as a whole.
>
> Cheers,
>
> - Ted

——————

1. The Copyleft / GPL / LGPL were a major innovation; they’ve allowed
the Linux kernel to become what it is, by enabling many large
commercial firms to pour money & ‘resources’ (people) into it, and to
fund the Boring but Essential Bits like testing.

Open Source isn’t a ‘Commercial Project’; by definition, people do
what they want, not what they’re directed to do. Uninteresting stuff
doesn’t get done, no matter how useful. And He Who Pays Wages decides
what is done.

I don’t believe either ESR or RMS foresaw the role of commercial firms
& the size of the Linux code base (40M LoC, Feb 2025).

——————

2. ESR / RMS never came up with a model to collect ‘donations’ and pay
volunteers, immortalised in XKCD ‘Dependency’. Unpaid volunteering
works at very small scale, but doesn’t scale to very large codebases.
The comic lists multiple examples.

Alt-text description: Someday ImageMagick will finally break for good
and we'll have a long period of scrambling as we try to reassemble
civilization from the rubble.

Image text: A project some random person in Nebraska has been
thanklessly maintaining since 2003

Google AI claims "XKCD ‘Dependency’: 25 July 2013, 1354th comic”, but
I can’t confirm that claim.

——————

3. OSS doesn’t have a good security model across all products; it
assumes all those random volunteers are ‘good faith’ actors. Supply
Chain Attacks are a live threat that has to be managed / mitigated.

Since 2013, when Mandiant published details of “APT1”, it’s not been
theoretical that patient, skilled, well-funded Actors could & would
target commercial organisations.

The almost successful XZ utils attack, under a GPL, demonstrated that
‘bad faith’ patient & skilled actors are a real risk to Open Source.

Project:

——————
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin

From tuhs at tuhs.org Sun Nov 2 14:02:32 2025
From: tuhs at tuhs.org (A. P. Garcia via TUHS)
Date: Sun, 2 Nov 2025 00:02:32 -0400
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <20251102021322.GA77791@macsyma.lan>
References: <196f66ae-5f43-44b4-bd61-d661630a0970@oracle.com>
 <20251102021322.GA77791@macsyma.lan>
Message-ID:

> Speaking as someone who was working on Linux from the beginning (I was
> the first North American Linux kernel hacker, and the first US FTP
> redistribution site for Linux was my desktop workstation at MIT, a
> Vaxstation 3100 M38 running Ultrix), I never agreed with ESR's model
> either, and I agree that Mr. Garcia's graph is also... not reflective
> of reality.

I gave my little book report to the class. Somewhere in the room, a
throat cleared. And then Ted Tso began to speak...

What can I say? I once read a literary criticism of Conrad’s Nostromo
titled Record and Reality by Edward Said, about how the stories we
tell about events eventually diverge from how they actually happened.
I think that’s what happened here too. I told the record, and then
reality showed up.

Thank you, Ted. I stand corrected on many things.

From tuhs at tuhs.org Sun Nov 2 15:11:40 2025
From: tuhs at tuhs.org (Arnold Robbins via TUHS)
Date: Sat, 01 Nov 2025 23:11:40 -0600
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>
References: <20251102021322.GA77791@macsyma.lan>
 <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>
Message-ID: <202511020511.5A25BerU064364@freefriends.org>

> Open Source has turned out to be a marathon, not a sprint,
> and the original simple unpaid volunteer model is failing.

That's for sure.

> - multiple unfunded critical projects exist with few new maintainers

There's an XKCD cartoon about this.
Chet Ramey maintains Bash, I maintain
gawk, and there are other important/critical GNU tools with just a few
maintainers who have

- been at it for decades,

- are getting older and wouldn't mind scaling back (speaking at
least for myself),

- are having trouble finding people willing to take over (also, speaking
at least for myself).

I have heard similar things from the current Emacs maintainer who is
even older than I am (he's in his late 60s).

I suspect there are multiple reasons for this, but the bottom line
is that if the next generation of maintainers doesn't step up to
the plate, a lot of important tools are going to start suffering
bit-rot.

Arnold

From tuhs at tuhs.org Sun Nov 2 15:36:13 2025
From: tuhs at tuhs.org (Warner Losh via TUHS)
Date: Sat, 1 Nov 2025 23:36:13 -0600
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <202511020511.5A25BerU064364@freefriends.org>
References: <20251102021322.GA77791@macsyma.lan>
 <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>
 <202511020511.5A25BerU064364@freefriends.org>
Message-ID:

On Sat, Nov 1, 2025, 11:11 PM Arnold Robbins via TUHS wrote:

> > Open Source has turned out to be a marathon, not a sprint,
> > and the original simple unpaid volunteer model is failing.
>
> That's for sure.
>
> > - multiple unfunded critical projects exist with few new maintainers
>
> There's an XKCD cartoon about this. Chet Ramey maintains Bash, I maintain
> gawk, and there are other important/critical GNU tools with just a few
> maintainers who have
>
> - been at it for decades,
>
> - are getting older and wouldn't mind scaling back (speaking at
> least for myself),
>
> - are having trouble finding people willing to take over (also, speaking
> at least for myself).
>
> I have heard similar things from the current Emacs maintainer who is
> even older than I am (he's in his late 60s).
>
> I suspect there are multiple reasons for this, but the bottom line
> is that if the next generation of maintainers doesn't step up to
> the plate, a lot of important tools are going to start suffering
> bit-rot.

Yes. When we started with open source, it was a passion project. That
passion was infectious. Others caught it too. They sent patches and
some became passionate.

As things grew, more and more people got paid. Even then, the
passionate folks got money or patches or both. The money fueled
development, but people got passionate at a much lower rate, so the
talent pool has dried up a bit, especially where there isn't a lot of
money flowing in.

It also made it harder to build a name contributing to open source
casually. Competing with paid professionals is hard, and getting
noticed became more difficult via that route. So the benches are
shallower as people do other things with their passion time and the
reward equation has shifted.

IMHO, volunteers built the open source movement, but couldn't sustain
it on volunteerism. The money sustains it now, but the dynamic that
started and nurtured it has shifted. Nothing really replaced the "we
are all in this together" aspects of the early days.

I'll totally admit this is a bit of a simplification, but it tracks
decently well with my involvement with open source over the last 40
years...

Warner

> Arnold
>

From tuhs at tuhs.org Sun Nov 2 16:25:16 2025
From: tuhs at tuhs.org (Arnold Robbins via TUHS)
Date: Sun, 02 Nov 2025 00:25:16 -0600
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To:
References: <20251102021322.GA77791@macsyma.lan>
 <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>
 <202511020511.5A25BerU064364@freefriends.org>
Message-ID: <202511020625.5A26PG0p069598@freefriends.org>

Warner Losh wrote:

> As things grew, more and more people got paid. Even then, the passionate
> folks got money or patches or both.

I never got a job doing Free Software, even though I tried a time
or two. Maybe once or twice (in 38 years!) someone paid me to do something
on gawk for them. 't'would have been nice to have made a living
with this passion.

> I'll totally admit this is a bit of a simplification, but it tracks
> decently well with my involvement with open source over the last 40
> years...

Yes, I'll agree.

Arnold

From tuhs at tuhs.org Sun Nov 2 17:57:20 2025
From: tuhs at tuhs.org (Andy Kosela via TUHS)
Date: Sun, 2 Nov 2025 08:57:20 +0100
Subject: [TUHS] evolution of the cli
In-Reply-To:
References:
Message-ID:

On Saturday, November 1, 2025, Marc Donner via TUHS wrote:

> Well, the samples you exhibit are fairly verbose and rely on keys that
> are difficult to touch type (most people who can touch type or pseudo
> touch type are primarily facile with the home key row and the row
> immediately above and below). Most touch typists cannot touch type
> numbers, and almost none can touch type the shifted top row keys
> (!@# ... _+). Worse yet, too many keyboard makers move the keys around
> depending on magic that no one understands.
>
> So verbose and hard to type stuff will be hard to persuade people to use
> on the command line. In a program, yes, but on the keyboard, I'm
> skeptical.
>
> And for more complex things, we have Python and other programming /
> scripting languages that are more than adequate.

I have to agree with Marc. It is too verbose and smells more like a
modern programming language than a shell language.

I tend to simplify things, and throughout the years I created a curated
list of aliases and functions which gives me a universal Unix command
line language. It is mostly based on one letter abbreviations, e.g. 'v'
for vi(1), 'u' for uptime(1), 'c' for cat(1), 'g' for grep(1), etc. The
gold standard for me was always the venerable ed(1) and its clever ways
of expressing thoughts and ideas.

For the modern Kubernetes ecosystem I am using abbreviated simple three
letter tokens. Instead of typing 'kubectl get pod' or 'kubectl describe
pod', I am using get, des, log, del, exe, img etc., omitting the
kubectl keyword entirely. This model is much more consistent and faster
to type than the default one.
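
In shell terms the scheme boils down to entries like these (an
illustrative sketch rather than my exact dotfiles, and the kubectl
expansions are one plausible reading of the tokens):

    # one-letter abbreviations for the everyday tools
    alias v=vi u=uptime c=cat g=grep

    # three-letter tokens that omit the kubectl keyword
    alias get='kubectl get'
    alias des='kubectl describe'
    alias log='kubectl logs'
    alias del='kubectl delete'
    alias exe='kubectl exec'

With these in place, 'get pod' expands to 'kubectl get pod', and the
one-letter forms behave the same way for their tools.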

This general attraction towards simplicity and minimalism in pure text
interfaces always fascinated me. My interest in the occult alphabets
and ancient philosophy only strengthens my view that wise men since
time immemorial have always occupied their minds with studying letters
and numbers. It was believed that through their different combinations
and permutations all forces of nature could be understood and changed.
The classic Shem HaMephorash -- 72 divine names consisting of three
letters each -- is a good example of such an ancient text interface,
embodying profound ideas and concepts.

Programming in general and the shell interface in particular are just
another interpretation of those ancient ideas that symbolic letters and
numbers are the key to understanding the universe.

To me the command line text interface will always be the most elegant
way to communicate with machines.

--Andy

From tuhs at tuhs.org Sun Nov 2 19:12:47 2025
From: tuhs at tuhs.org (Cameron Míċeál Tyre via TUHS)
Date: Sun, 02 Nov 2025 09:12:47 +0000
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <202511020511.5A25BerU064364@freefriends.org>
References: <20251102021322.GA77791@macsyma.lan>
 <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>
 <202511020511.5A25BerU064364@freefriends.org>
Message-ID:

Good morning!

I have followed this thread with great interest. The comments about
unpaid volunteers and maintainers, I agree with.

My own "drop in the ocean" solution is to donate to individuals or
groups that create/fork/maintain things I consider essential to my
daily FOSS-centered life. Every dollar I give, I hope and pray there
are at least ten other people giving a dollar also. It's often
advertised as "buy the creator a coffee" but I'm sure "buy the
maintainer some server space" might often be more on the mark.

To all the creators, maintainers, everyone who keeps essential,
much-used projects alive, thank you.

Cameron

Sent from Proton Mail for Android.

-------- Original Message --------
On Sunday, 11/02/25 at 05:12 Arnold Robbins via TUHS wrote:

> Open Source has turned out to be a marathon, not a sprint,
> and the original simple unpaid volunteer model is failing.

That's for sure.

> - multiple unfunded critical projects exist with few new maintainers

There's an XKCD cartoon about this. Chet Ramey maintains Bash, I
maintain gawk, and there are other important/critical GNU tools with
just a few maintainers who have

- been at it for decades,

- are getting older and wouldn't mind scaling back (speaking at
least for myself),

- are having trouble finding people willing to take over (also, speaking
at least for myself).

I have heard similar things from the current Emacs maintainer who is
even older than I am (he's in his late 60s).

I suspect there are multiple reasons for this, but the bottom line
is that if the next generation of maintainers doesn't step up to
the plate, a lot of important tools are going to start suffering
bit-rot.

Arnold

From tuhs at tuhs.org Sun Nov 2 19:40:13 2025
From: tuhs at tuhs.org (Cameron Míċeál Tyre via TUHS)
Date: Sun, 02 Nov 2025 09:40:13 +0000
Subject: [TUHS] evolution of the cli
In-Reply-To:
References:
Message-ID: <5YsGcqU3O8G_TOR3BqBvkWsvHT-N3ooHWPlULaU1KDO_xh3HPduTjKx35rrUCaOv5CX10gp-DtP5r6bnwzLx0wqpqRLcfyQsqFgcKv-oY-A=@protonmail.ch>

Andy,

Wow, you've written what was stuck in my head, but you've written it
way more eloquently than I could have. Also thank you for educating me
on the Shem HaMephorash.

I was guilty of drooling over the new upcoming GUIs in the late 1980s
and wishing my system at the time had enough memory to run them better.
Instead of completing college essays and projects on text-based editors
that worked just fine, I wasted time running GEM desktop even though my
system struggled with only 512 kB of memory.

With wisdom since gained and the magic of hindsight, I've reversed my
behavior from back then. Being a fairly simple guy, tools such as ed,
despite being simpler, actually make me more productive, because the
only thing on the screen is what I've typed, nothing else to distract.

Cameron

Andy wrote:

I have to agree with Marc. It is too verbose and smells more like a
modern programming language than a shell language.

I tend to simplify things, and throughout the years I created a curated
list of aliases and functions which gives me a universal Unix command
line language. It is mostly based on one letter abbreviations, e.g. 'v'
for vi(1), 'u' for uptime(1), 'c' for cat(1), 'g' for grep(1), etc. The
gold standard for me was always the venerable ed(1) and its clever ways
of expressing thoughts and ideas.

For the modern Kubernetes ecosystem I am using abbreviated simple three
letter tokens. Instead of typing 'kubectl get pod' or 'kubectl describe
pod', I am using get, des, log, del, exe, img etc., omitting the
kubectl keyword entirely. This model is much more consistent and faster
to type than the default one.

This general attraction towards simplicity and minimalism in pure text
interfaces always fascinated me. My interest in the occult alphabets
and ancient philosophy only strengthens my view that wise men since
time immemorial have always occupied their minds with studying letters
and numbers. It was believed that through their different combinations
and permutations all forces of nature could be understood and changed.
The classic Shem HaMephorash -- 72 divine names consisting of three
letters each -- is a good example of such an ancient text interface,
embodying profound ideas and concepts.

Programming in general and the shell interface in particular are just
another interpretation of those ancient ideas that symbolic letters and
numbers are the key to understanding the universe.

To me the command line text interface will always be the most elegant
way to communicate with machines.

--Andy

From tuhs at tuhs.org Mon Nov 3 00:19:05 2025
From: tuhs at tuhs.org (Theodore Tso via TUHS)
Date: Sun, 2 Nov 2025 09:19:05 -0500
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <202511020625.5A26PG0p069598@freefriends.org>
Message-ID: <20251102141905.GB77791@macsyma.lan>

On Sat, Nov 01, 2025 at 08:47:24PM -0600, Warner Losh wrote:
> Indeed. At times, though, part of what it takes to get good code into the
> tree has an element of politics about it. It's another aspect of open
> source ESR's model fails to capture.

Any time you have people involved, there will always be politics. But
one of the reasons why it can be especially challenging to get "good
code" into a project is because it's not just about the source code in
isolation. It's also about the team behind the source code.

Unfortunately, the Linux kernel code has been plagued by drive-by code
submissions. A lot of companies have wanted to get code into the
kernel --- and then disappear. If users then start to depend on the
code, and the original code submitters have disappeared, perhaps
because their company has reassigned the team to the next two or three
generations of mobile handsets, then the open source community is left
holding the bag.

This is why people who have a known track record are given more
latitude; people who are trusted to stick around, even if they need to
use their own personal time to keep the code maintained. That's why
unknown developers might be required to wait until their code is close
to perfect before being integrated, both because if they *do*
disappear, it won't be that disastrous, and because that way people can
be more comfortable that their personal (not just corporate) dedication
to their code base will keep them around even if they get laid off or
reassigned.

Another issue is when the code requires massive change outside of the
subsystem. I've heard ZFS (at least when it was first developed for
Solaris) described as not exactly a file system, but a massive memory
management subsystem with one hell of a backing store. If the code
requires changes in other subsystems, there will of course need to be a
lot of negotiation, since the maintainers of the adjacent subsystems
would have to support the new abstractions, and in some cases, the new
file system (cough, *bcachefs* cough) would put entirely new demands on
the subsystem --- and of course, the core developers of that new
subsystem often believe their needs are more important than anyone
else's, including other file systems....

And finally, if the developers of that new subsystem are sufficiently
toxic that a large number of maintainers in adjacent subsystems start
refusing to attend in person because that person is an arrogant S.O.B.,
denigrating other people's code, competence, and paternity, that
subsystem might need to be ejected from the code base after it had been
accepted. That's always painful, full of drama, and while it might be
good for view counts on various YouTube channels, and clicks on
Phoronix, it's better if that can be avoided. And that can be another
reason why a project might be hesitant to accept a code contribution
sight unseen. It's *never* just about the source code.

I will say that, speaking personally, there were attempts to recruit me
into working on GNU HURD and NetBSD in the early 1990's. However, there
were at least one or two people in the Cambridge, Massachusetts area
who were *actively* toxic, or otherwise unpleasant to work with, such
that this was never an option for me.
So it's always interesting to hear people
talk about the supposed rough edges of Linus Torvalds; compared to some
of the personalities that I've experienced, and broken bread with, I
would prefer to work with Linus any day of the week.

On Sun, Nov 02, 2025 at 12:25:16AM -0600, Arnold Robbins via TUHS wrote:
> Warner Losh wrote:
>
> > As things grew, more and more people got paid. Even then, the passionate
> > folks got money or patches or both.
>
> I never got a job doing Free Software, even though I tried a time
> or two. Maybe once or twice (in 38 years!) someone paid me to do something
> on gawk for them. 't'would have been nice to have made a living
> with this passion.

This is another way in which people who opine about winning Open Source
strategies are very much influenced by survivorship bias. Projects like
GCC and the Linux kernel have enough surplus value that companies who
invest a relatively small amount can see multiple times the initial
investment. That's why Cygnus Support was often able to sell the same
GCC improvement to multiple companies, and then only develop the
feature once.

But that's not true for all projects. Even if the project is super
important, such as, say, xz or openssl, and even if many companies
depend on the software component, if there aren't potential
improvements that would result in return on investment in terms of
something that a company could sell as a product or service --- most
companies won't invest in the open source project for its own sake.
This is why I stress that it's extremely useful for an open source
maintainer to have product management skills as part of their toolset.
And it's not just xz, but it's also projects like.... gawk, bash,
emacs, etc.

So just because a particular model works for gcc and the Linux kernel,
we should be careful not to assume that it's just because of the
particular development practices of that component. It very well might
be that the projects that have been extremely successful were just
outliers, based on where they fit inside the ecosystem, and the
business leverage that they might have.

- Ted

From tuhs at tuhs.org Mon Nov 3 00:31:45 2025
From: tuhs at tuhs.org (Hauke Fath via TUHS)
Date: Sun, 2 Nov 2025 15:31:45 +0100
Subject: [TUHS] 3 essays on the ujnix legacy
In-Reply-To: <202511020625.5A26PG0p069598@freefriends.org>
References: <20251102021322.GA77791@macsyma.lan>
 <0073F4AD-379A-45F9-9576-8F8512FB35E2@canb.auug.org.au>
 <202511020511.5A25BerU064364@freefriends.org>
 <202511020625.5A26PG0p069598@freefriends.org>
Message-ID: <20251102153145420461.a5b107c0@Espresso.Rhein-Neckar.DE>

On Sun, 02 Nov 2025 00:25:16 -0600, Arnold Robbins via TUHS wrote:
> I never got a job doing Free Software, even though I tried a time
> or two. Maybe once or twice (in 38 years!) someone paid me to do something
> on gawk for them. 't'would have been nice to have made a living
> with this passion.

From somebody who finds himself increasingly gravitating to awk in
daily work the more he is getting familiar with its ins and outs --
thank you for your work.

applies, I guess.

Cheerio,
Hauke

--
Hauke Fath
Linnéweg 7
64342 Seeheim-Jugenheim
Germany