From grog at lemis.com Fri Aug 3 10:16:33 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Fri, 3 Aug 2018 10:16:33 +1000 Subject: [COFF] [TUHS] Australian Computer Museum Society collection faces bulldozers In-Reply-To: <20180802142531.GA16356@freaknet.org> References: <20180802142531.GA16356@freaknet.org> Message-ID: <20180803001633.GB43761@eureka.lemis.com> [redirected to COFF] On Thursday, 2 August 2018 at 14:25:31 +0000, asbesto wrote: > > IDK if you know this, but > > https://www.itnews.com.au/news/australian-computer-museum-society-collection-faces-bulldozers-499452 I sent them mail offering to help, but no answer so far. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From peter at rulingia.com Mon Aug 6 12:48:14 2018 From: peter at rulingia.com (Peter Jeremy) Date: Mon, 6 Aug 2018 12:48:14 +1000 Subject: [COFF] [TUHS] In Memoriam: Per Brinch Hansen In-Reply-To: <201808021244.w72CiuU4025022@tahoe.cs.Dartmouth.EDU> References: <201808021244.w72CiuU4025022@tahoe.cs.Dartmouth.EDU> Message-ID: <20180806024814.GA79584@server.rulingia.com> [Moved from TUHS to COFF] On 2018-Aug-02 08:44:56 -0400, Doug McIlroy wrote: >My collection of early computer manuals includes Brinch Hansen's manual >for the RC 4000, which stands out for its precise description of the >CPU logic--in Algol 60! It's the only manual I have seen that offers a >good-to-the-last-bit formal description of the hardware. The book "A Programming Language" by Kenneth Iverson included a formal description of the IBM 7090 in Iverson Notation (now APL). I believe that is the first formal description of any computer. 
The success of that led IBM to include a formal description of the System/360 architecture in the IBM Systems Journal issue introducing the S/360. I've been told that IBM has since regretted that decision since it opened the way for other manufacturers to clone the S/360. -- Peter Jeremy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cym224 at gmail.com Tue Aug 7 01:25:03 2018 From: cym224 at gmail.com (Nemo) Date: Mon, 6 Aug 2018 11:25:03 -0400 Subject: [COFF] [TUHS] In Memoriam: Per Brinch Hansen In-Reply-To: <20180806024814.GA79584@server.rulingia.com> References: <201808021244.w72CiuU4025022@tahoe.cs.Dartmouth.EDU> <20180806024814.GA79584@server.rulingia.com> Message-ID: On 05/08/2018, Peter Jeremy wrote: > The book "A Programming Language" by Kenneth Iverson included a formal > description of the IBM 7090 in Iverson Notation (now APL). I believe that > is the first formal description of any computer. The success of that led > IBM to include a formal description of the System/360 architecture in the > IBM Systems Journal issue introducing the S/360. Blaauw & Brooks wrote a massive tome describing all manner of machines in APL ("Computer Architecture: Concepts and Evolution", A-W). I am not familiar with it myself. Is anyone on the list? > I've been told that IBM has since regretted that decision since it opened > the way for other manufacturers to clone the S/360. Interesting! N. From dave at horsfall.org Tue Aug 7 10:41:09 2018 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 7 Aug 2018 10:41:09 +1000 (EST) Subject: [COFF] Happy birthday, Harvard Mk I! Message-ID: One of the first digital computers in the world[*], the Harvard Mk I, was dedicated on this day in 1944; it was an enormous electromechanical beast.
[*] Please, no discussion on what constitutes a "digital computer" as we've pretty much done that to death; it all comes down to a matter of definition to suit one's agenda. -- Dave From dave at horsfall.org Tue Aug 7 11:01:00 2018 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 7 Aug 2018 11:01:00 +1000 (EST) Subject: [COFF] [TUHS] In Memoriam: Per Brinch Hansen In-Reply-To: References: <201808021244.w72CiuU4025022@tahoe.cs.Dartmouth.EDU> <20180806024814.GA79584@server.rulingia.com> Message-ID: On Mon, 6 Aug 2018, Nemo wrote: [ APL description of the S/360 ] >> I've been told that IBM has since regretted that decision since it >> opened the way for other manufacturers to clone the S/360. > > Interesting! I think Fujitsu were the biggest clones, followed by Hitachi? And I seem to recall at least one Russian clone... -- Dave From grog at lemis.com Tue Aug 7 11:28:47 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Tue, 7 Aug 2018 11:28:47 +1000 Subject: [COFF] Choice of time zone (was: In Memoriam: Edsger Dijkstra, and happy birthday Jon Postel!) In-Reply-To: References: <20180806011545.29C1F18C09A@mercury.lcs.mit.edu> Message-ID: <20180807012847.GA6963@eureka.lemis.com> On Tuesday, 7 August 2018 at 10:51:12 +1000, Dave Horsfall wrote: > ...; you won't believe the number of "history" sites I've seen that > directly contradict each other (even when taking UTC/US/local time > into account, and I try and use local time wherever possible). To further confuse the issue? With UTC we know where we are. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From bakul at bitblocks.com Tue Aug 7 12:04:20 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Mon, 6 Aug 2018 19:04:20 -0700 Subject: [COFF] [TUHS] In Memoriam: Per Brinch Hansen In-Reply-To: References: <201808021244.w72CiuU4025022@tahoe.cs.Dartmouth.EDU> <20180806024814.GA79584@server.rulingia.com> Message-ID: <0CBDDA2C-0095-4C55-8037-5DC401230F6C@bitblocks.com> > On Aug 6, 2018, at 6:01 PM, Dave Horsfall wrote: > > On Mon, 6 Aug 2018, Nemo wrote: > > [ APL description of the S/360 ] > >>> I've been told that IBM has since regretted that decision since it opened the way for other manufacturers to clone the S/360. >> >> Interesting! > > I think Fujitsu were the biggest clones, followed by Hitachi? And I seem to recall at least one Russian clone... The ES EVM (ЕС ЭВМ) series. At IIT Bombay we had a model EC-1030. From dave at horsfall.org Tue Aug 7 12:08:11 2018 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 7 Aug 2018 12:08:11 +1000 (EST) Subject: [COFF] Choice of time zone (was: In Memoriam: Edsger Dijkstra, and happy birthday Jon Postel!) In-Reply-To: <20180807012847.GA6963@eureka.lemis.com> References: <20180806011545.29C1F18C09A@mercury.lcs.mit.edu> <20180807012847.GA6963@eureka.lemis.com> Message-ID: On Tue, 7 Aug 2018, Greg 'groggy' Lehey wrote: >> into account, and I try and use local time wherever possible). > > To further confuse the issue? With UTC we know where we are. No; out of respect for the people who were actually there at the time (hence America can be a day behind). I'll give UTC some thought (which I use for space-related stuff anyway) but for example I have no idea which American timezone is which (I only know British i.e. UTC and Australian). On the other hand, I may just stop posting historical stuff completely; I don't really care either way, and I would really like to get back to Amateur ("ham") radio and electronics in general.
-- Dave (VK2KFU) From grog at lemis.com Fri Aug 10 12:23:10 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Fri, 10 Aug 2018 12:23:10 +1000 Subject: [COFF] Why did Motorola fail? Message-ID: <20180810022310.GE4245@eureka.lemis.com> Forty years ago Motorola 680x0 CPUs powered most good Unix boxen, with the exception of this upstart SPARC thing. And then they were gone. I'm trying to remember why. Can anybody help me? I recall claims that Moto didn't put enough effort into development, but was this primarily a technical or a commercial issue? Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From stewart at serissa.com Fri Aug 10 12:55:14 2018 From: stewart at serissa.com (Lawrence Stewart) Date: Thu, 9 Aug 2018 22:55:14 -0400 Subject: [COFF] Why did Motorola fail? In-Reply-To: <20180810022310.GE4245@eureka.lemis.com> References: <20180810022310.GE4245@eureka.lemis.com> Message-ID: Well RISC happened. I suppose SPARC was part of that, but it was preceded by the IBM 801 and evolved along with the MIPS R2000 and R3000 and the HP PA-RISC. In those days, semiconductor density wasn’t so high, and RISCs were substantially simpler. Then Digital kicked everyone in the teeth with 200 MHz Alpha parts in the early ‘90s and we were off on the clock races. Later, as density improved, CISCs became competitive again and by 1994 or so PCs running BSD were the systems of choice at my startup. It was too late for Motorola. One thing I’m puzzled about is that TI never really made a run. 
They had very nice, fast, DSP chips around then, and it wouldn’t have been that hard to put together a decent general purpose chip, but it never happened. > On 2018, Aug 9, at 10:23 PM, Greg 'groggy' Lehey wrote: > > Forty years ago Motorola 680x0 CPUs powered most good Unix boxen, with > the exception of this upstart SPARC thing. And then they were gone. > I'm trying to remember why. Can anybody help me? I recall claims > that Moto didn't put enough effort into development, but was this > primarily a technical or a commercial issue? > > Greg > -- > Sent from my desktop computer. > Finger grog at lemis.com for PGP public key. > See complete headers for address and phone numbers. > This message is digitally signed. If your Microsoft mail program > reports problems, please read http://lemis.com/broken-MUA > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff From arno.griffioen at ieee.org Fri Aug 10 16:00:12 2018 From: arno.griffioen at ieee.org (Arno Griffioen) Date: Fri, 10 Aug 2018 08:00:12 +0200 Subject: [COFF] Why did Motorola fail? In-Reply-To: <20180810022310.GE4245@eureka.lemis.com> References: <20180810022310.GE4245@eureka.lemis.com> Message-ID: <20180810060012.GB3097@ancienthardware.org> On Fri, Aug 10, 2018 at 12:23:10PM +1000, Greg 'groggy' Lehey wrote: > Forty years ago Motorola 680x0 CPUs powered most good Unix boxen, with > the exception of this upstart SPARC thing. And then they were gone. > I'm trying to remember why. Can anybody help me? I recall claims > that Moto didn't put enough effort into development, but was this > primarily a technical or a commercial issue? I'd say it was mostly a commercial/business direction focus, perhaps combined with lack of funds, that killed off the M68k family as a workstation/server CPU in the end. 
This is just my personal experience as an old Amiga/Atari/Mac geek though, so the official internal Motorola story may be totally different. IMHO Motorola lost focus and started betting on too many horses/markets and spread its resources too thin to keep up in the CPU horsepower race. The M68k family itself was a good example of this, with the company trying to push it more and more into wildly different markets with all sorts of wacky models that lacked various components (FPU, MMU, etc.) but in the process losing focus and as a result no longer investing in really making big steps or taking big leaps with the 'full fat' workstation/server models to keep the platform itself up to speed with competitors from AMD/Intel in the CISC arena. Probably seemed like a good idea in the short term to sell more (cheaper) units, but in the end it may have been too short-sighted. Something like the 68060 was a nice CPU, but Moto really dropped the ball on the introduction and on providing 'companion' support/emulation info for the reduced functions in some areas (MMU and FPU mostly), and that killed it off for many system designers. Around the Pentium era, when Intel started coming out with the second generation that started exceeding the 100 MHz clock rate, it became clear that Moto had lost the race and was seemingly not really interested anymore. (or had run out of cash?) I did hear some rumours that a 68080 was on the drawing board along with a push to move the instruction set of the family to 64-bit, but I don't know how much is/was true about that. Even though (IMHO) the M68k was a much nicer CPU environment to work with than the 8086-on-steroids CPUs, you couldn't argue with the raw MIPS speed for the low cost of the PCs by that time...
Around the same time Moto also dabbled in the RISC area with the ill-fated 88000 series, which never really gained traction apart from some workstations, with (again) a lack of focus as to what they wanted to do with the platform as far as it being aimed at embedded or workstation/server use. (the multi-chip approach probably didn't make system builders too happy either..) In the end I suppose they kinda got what they wanted, as they got fairly successful in the embedded market with the Coldfire as an 'm68k-reimagined' CPU that was at least more focused as to its market/task. Although even there I guess they missed the boat, as ARM (and derivatives) is totally prevalent in bucketloads of smartphones and appliances while Coldfire CPUs are not as popular. Small side-step.. The Coldfire is still so close to the original M68k architecturally that the Linux/m68k tree is shared/merged with the Coldfire CPU support, and even though the Coldfire is aimed at mostly embedded use they are starting to sprout MMUs and other bits.. Bye, Arno. From dave at horsfall.org Fri Aug 10 17:24:39 2018 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 10 Aug 2018 17:24:39 +1000 (EST) Subject: [COFF] Why did Motorola fail? In-Reply-To: <20180810022310.GE4245@eureka.lemis.com> References: <20180810022310.GE4245@eureka.lemis.com> Message-ID: On Fri, 10 Aug 2018, Greg 'groggy' Lehey wrote: > Forty years ago Motorola 680x0 CPUs powered most good Unix boxen, with > the exception of this upstart SPARC thing. And then they were gone. I'm > trying to remember why. Can anybody help me? I recall claims that Moto > didn't put enough effort into development, but was this primarily a > technical or a commercial issue? Well, having Microsoft supporting Intel wouldn't've helped... -- Dave From bakul at bitblocks.com Fri Aug 10 17:44:21 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Fri, 10 Aug 2018 00:44:21 -0700 Subject: [COFF] Why did Motorola fail?
In-Reply-To: Your message of "Fri, 10 Aug 2018 12:23:10 +1000." <20180810022310.GE4245@eureka.lemis.com> References: <20180810022310.GE4245@eureka.lemis.com> Message-ID: <20180810074429.07108156E400@mail.bitblocks.com> On Fri, 10 Aug 2018 12:23:10 +1000 Greg 'groggy' Lehey wrote: > > Forty years ago Motorola 680x0 CPUs powered most good Unix boxen, with > the exception of this upstart SPARC thing. And then they were gone. > I'm trying to remember why. Can anybody help me? I recall claims > that Moto didn't put enough effort into development, but was this > primarily a technical or a commercial issue? I think the greatest influence has to be what IBM chose for the PC. This is what Gates said in a 1997 interview with PC Magazine: For IBM it was extremely different because this was a project where they let a supplier -- a partner, whatever you call us -- shape the definition of the machine and provide fundamental elements of the machine. When they first came to us, their concept was to do an 8-bit computer. And the project was more notable because they were going to do it so quickly and use an outside company....The novel thing was: could you work with outsiders, which in this case was mostly ourselves but also Intel, and do it quickly? And the key engineer on the project, Lou Eggebrecht, was fast-moving. Once we convinced IBM to go 16-bit (and we looked at 68000 which unfortunately wasn't debugged at the time so decided to go 8086), he cranked out that motherboard in about 40 days. Dave Bradley, who wrote the BIOS (Basic Input Output System) for the IBM PC, and many of the other engineers involved say IBM had already decided to use the x86 architecture while the project was still a task force preparing for management approval in August 1980. In a 1990 article for Byte, Bradley said there were four main reasons for choosing the 8088. First, it had to be a 16-bit chip that overcame the 64K memory limit of the 8-bit processors.
Second, the processor and its peripheral chips had to be immediately available in quantity. Third, it had to be technology IBM was familiar with. And fourth, it had to have available languages and operating systems. Cribbed from: https://forwardthinking.pcmag.com/chips/286228-why-the-ibm-pc-used-an-intel-8088 From clemc at ccc.com Fri Aug 10 23:40:52 2018 From: clemc at ccc.com (Clem Cole) Date: Fri, 10 Aug 2018 09:40:52 -0400 Subject: [COFF] Why did Motorola fail? In-Reply-To: <20180810074429.07108156E400@mail.bitblocks.com> References: <20180810022310.GE4245@eureka.lemis.com> <20180810074429.07108156E400@mail.bitblocks.com> Message-ID: On Fri, Aug 10, 2018 at 3:44 AM, Bakul Shah wrote: > > I think the greatest influence has to be what IBM chose for > the PC. I agree.. I think you nailed it. FWIW: I used to commute to work with Les Crudele, who was the lead HW guy on the 68000 (and later MIPS and a few other things - pretty amazing guy). Les' stories of how Moto dropped the ball are classic. Frankly, I think the 'failure' started with the chip at the start. It was a skunkworks project in the back of a lab, and they had to hide the original mask charge. They borrowed time on a PDP-11/70 (running ISC's UNIX) to do their support. Tom Grunner told his bosses that they were playing with an idea and it was just a couple of guys, let them be. The hundreds of others were focused on the real product (the 6800 line). What's really interesting is that IBM and Moto were pretty tight at the time. MECL - Motorola Emitter Coupled Logic - had been designed by Moto for IBM for the System/360 and was licensed. When the original X-series chip (what would become the 68000) was fabbed, they sent 10 of them to a number of their partners (it did not have a number yet). IBM had them, as did we at Tektronix (and I've told the story of cobbling together hacks on the Ritchie compiler to create something to emit what would become 68000 instructions in the summer of 1979 before it was announced).
Les says that when IBM visited Austin to talk about a processor for their project, they had had the experimental chip running in the lab in NY/Conn. But Motorola marketing told them what they needed was the newly announced 6809 and that the device Les and team were making was just a test. No plans for it. IBM insisted on a 16-bit part (per the Gates recommendation discussed before, and they knew others like Intel had them). Moto tried to show them the 16-bit extensions in the 6809. Les said IBM kept asking and asking about the 'other chip' but Moto management said it is not a product - the 6809 is. IBM would leave the Moto meeting, and the rest is history. BTW: the other story he tells is when Jobs did use the 68000 for what would become the Mac, Moto offered that base/limit-register MMU chip for free (which I've forgotten the number); but Jobs said they didn't want it, it would make the design too complicated. They were making a PC and did not need an MMU (remember the Xerox Altos did not have one either). The other thing of course is the hash they made of the 68000 instruction space with the Mac OS system traps fiasco. Then as Larry points out the CISC vs. RISC craze began, and the problem was that by the time of the 88000, Intel had started to catch up in base performance. And the whole RISC vs. CISC thing was misunderstood -- economics won out. As I like to say, 'success' in the computer business is driven by economics as the high bit, not pure technology. Christensen's disruption theory explains it the best. The problem is that a new technology, particularly when it comes from within, is scary for an established firm, because it will erode the cash cow you already have. Moto was making big bucks with the 6800, and the 6809 was the replacement -- that's what they had planned. The 68000 came from nowhere and was not valued, so it was not given a chance. By the time they recognized its value, they had lost the important (economic) player (IBM).
To be fair, at the time, I did not think Intel would be able to recover from the segmented long/short pointer issues of the 8086. The 386 was an awesome recovery from a technology standpoint, but it only came to be because of the economics of the PC. And by the way, it was also a back room solution while large parts of the rest of the firm were working on new 'better' tech chasing the RISC chimera. BTW: Look at the missteps Intel made with architecture -- the i432, i860, i960, Itanium. Interesting technology. As Larry said, Alpha was just amazing from DEC. But in the end, what mattered was economic volume. Better margins in all of these than the established tech, but they all lost in the end. It will be interesting to see if my firm realizes this as we move into the future. High margins are something senior managers love because they keep profits up/the stock price high, until the disruption occurs..... then you are in trouble. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Sat Aug 11 00:43:20 2018 From: lm at mcvoy.com (Larry McVoy) Date: Fri, 10 Aug 2018 07:43:20 -0700 Subject: [COFF] Why did Motorola fail? In-Reply-To: <20180810022310.GE4245@eureka.lemis.com> References: <20180810022310.GE4245@eureka.lemis.com> Message-ID: <20180810144320.GA17564@mcvoy.com> On Fri, Aug 10, 2018 at 12:23:10PM +1000, Greg 'groggy' Lehey wrote: > Forty years ago Motorola 680x0 CPUs powered most good Unix boxen, with > the exception of this upstart SPARC thing. And then they were gone. The original SPARC CPU was 20K gates and was faster than the 68020. My guess is cheaper as well. I was at Sun as they were making the transition from 68K to SPARC and we all fought to get SPARC machines because of performance. I liked the 68K well enough, it was fairly nice in assembler (though my heart belongs to the PDP-11 first, the National 32032 next, and then the 68K for assembler). But the SPARC chips were just faster.
From clemc at ccc.com Sat Aug 11 01:13:48 2018 From: clemc at ccc.com (Clem Cole) Date: Fri, 10 Aug 2018 11:13:48 -0400 Subject: [COFF] Why did Motorola fail? In-Reply-To: <20180810144320.GA17564@mcvoy.com> References: <20180810022310.GE4245@eureka.lemis.com> <20180810144320.GA17564@mcvoy.com> Message-ID: On Fri, Aug 10, 2018 at 10:43 AM, Larry McVoy wrote: > The original SPARC CPU was 20K gates and was faster than the 68020. > Yep... pretty clean design. > My guess is cheaper as well. Maybe -- TI was the fab, right? Moto's fabs were pretty good, not as good as Intel's in those days. IIRC TI was still transitioning from BiPolar to CMOS, and most of the fab capacity was still in their BiPolar area (somewhere along the line they bought Nat Semi). As I understand from a buddy who was at TI at the time, DSPs and SPARCs were driving the transition. But they might not have been there yet. > I was at Sun as they were making the > transition from 68K to SPARC and we all fought to get SPARC machines > because of performance. > Yeah, they kicked butt. Less is more and all that. Had us worried at Stellar because we were doing custom (Gate Arrays). We had hit 22 MIPS; if I recall, SPARC was in the 4-6 range, but that was pretty darned good for a single chip at the time. > > I liked the 68K well enough, it was fairly nice in assembler (though > my heart belongs to the PDP-11 first, the National 32032 next, and > then the 68K for assembler). But the SPARC chips were just faster. +1, although I'd probably swap the 68K and 32032, because the National device (which was pretty much a VAX on a chip, as the 68k was an 11 on a chip) was clean and cool; but it came later in my life, so I knew the 68K better. BTW: Stellar was a 'RISCy' 68K with support for Fortran (*i.e.* indirection beyond pure load/store).
We used to say all devices post Dave's papers were RISC ;-) FWIW: I never really thought much of the RISC chips, except maybe by the time of the MIPS 4400 series; but then again they were designed for compiler writers. And the whole RISC thing was a bit of marketing. John Cocke never said "reduce the instruction set", he said "compile to (*i.e.* expose) the microcode." Dave sort of misunderstood his message. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Sat Aug 11 01:45:09 2018 From: lm at mcvoy.com (Larry McVoy) Date: Fri, 10 Aug 2018 08:45:09 -0700 Subject: [COFF] Why did Motorola fail? In-Reply-To: References: <20180810022310.GE4245@eureka.lemis.com> <20180810144320.GA17564@mcvoy.com> Message-ID: <20180810154509.GA21082@mcvoy.com> On Fri, Aug 10, 2018 at 11:13:48AM -0400, Clem Cole wrote: > On Fri, Aug 10, 2018 at 10:43 AM, Larry McVoy wrote: > > My guess is cheaper as well. > > Maybe -- TI was the fab right? Moto's fabs were pretty good, not as good > as Intel in those days. I'm not sure who did the first SPARC, I'd guess TI, they did most of the follow on designs. I know Fujitsu did some SPARC chips but I'm not sure if that was solely for their own use or if Sun used those. But I remember a lot of TI chips and TI was not speedy turning those around, the hardware guys were under a lot of pressure to get it right the first time (and I think that was rare). > I never really thought much of the RISC chips, except maybe by the time > of the MIPS 4400 series; but then again they were designed for > compiler writers. > > And the whole RISC thing was a bit of marketing. Yeah, I tend to agree. But there was some wisdom in less is more, it was easier to make the clocks go faster when the instructions are simple. From david at kdbarto.org Sat Aug 11 01:53:55 2018 From: david at kdbarto.org (David) Date: Fri, 10 Aug 2018 08:53:55 -0700 Subject: [COFF] Why did Motorola fail?
(Cost and performance) In-Reply-To: <20180810154509.GA21082@mcvoy.com> References: <20180810022310.GE4245@eureka.lemis.com> <20180810144320.GA17564@mcvoy.com> <20180810154509.GA21082@mcvoy.com> Message-ID: Back when I was doing vision processing in a parallel way, I worked with the hardware guys to move from a 68010 to the 020 and then the 040. Each was a big step up in performance and we had little work to do in the software other than recompiling to get the better performance. And then I ran the vision algorithms on my Sparc desktop box (I think it was a SS10, maybe a SS20). And they ran about 30x faster. Same image, same results, 30x faster. So I spent some time to come up with a design that pulled the images from the camera directly into the Sparc host and with a little help from the hardware guys we had a board that would do that. So it came down to cost and performance. I loved the 68k processors and was very happy with them, but we could build custom frame grabbers cheaper and have the host do the work faster. David From wkt at tuhs.org Mon Aug 20 13:27:02 2018 From: wkt at tuhs.org (Warren Toomey) Date: Mon, 20 Aug 2018 13:27:02 +1000 Subject: [COFF] Formal Specification and Verification Message-ID: <20180820032702.GA29224@minnie.tuhs.org> I've forwarded this e-mail from the TUHS list. Feel free to respond to the thread here. Thanks, Warren ----- Forwarded message from "Perry E. Metzger" ----- Date: Sun, 19 Aug 2018 20:57:58 -0400 From: "Perry E. Metzger" To: George Michaelson Cc: TUHS main list Subject: Re: [TUHS] Formal Specification and Verification (was Re: TUHS Digest, Vol 33, Issue 5) On Mon, 20 Aug 2018 09:47:58 +1000 George Michaelson wrote: > Witness to this, the interest in *trying* to apply Coq to Ethereum > smart contracts... The Tezos cryptocurrency was created with a smart contract language specifically designed for formal verification. > I think Perry wrote something of long term value. 
I encourage you to > write this up somewhere you'd be willing to have published like a > tech blog. Not sure where one would publish it. I also note that the general response here was the one I almost always get when I mention this stuff to people, which is near silence. (I used to get exactly this response 30 or 35 years ago when explaining the Internet to most people who had not yet come into contact with it, so I suppose it's not overly surprising.) That said, let me make a strong prediction: in not very many years, pretty much everyone doing serious work in computer science (say, designing security protocols, building mission critical systems, etc.) will be building them using some sort of formal verification assistant systems. I suspect this change will be as transformative as the triumph of high level languages over the previous supremacy of machine language coding. (For those that forget, one of the things that made Unix successful was, of course, that unlike many other OSes of the era, it was (ultimately) written in a portable language and not in machine code.) People who don't use formal verification when writing serious code will seem as antiquated and irresponsible as people who build mission critical systems now without test coverage. This is going to be a big revolution. Perry > -G > > On Sun, Aug 19, 2018 at 5:57 AM, Perry E. Metzger > wrote: > > Sorry for the thread necromancy, but this is a _very_ important > > topic. Perhaps it doesn't belong on tuhs but rather on coff. > > > > This is a pretty long posting. If you don't care to read it, the > > TL;DR is that formal specification and verification is now a real > > discipline, which it wasn't in the old days, and there are > > systems to do it in, and it's well understood. > > > > On 2018-08-06 at 08:52 -0700, Bakul Shah wrote: > >> > >> What counts as a "formal spec"? Is it like Justice Potter > >> Stewart's "I know it when I see it" definition or something > >> better? 
> > > > At this point, we have a good definition. A formal specification > > is a description of the behavior of a program or piece of > > hardware in a precise machine-readable form that may be used as > > the basis for fully formal verification of the behavior of an > > implementation. Often these days, the specification is given in a > > formal logic, such as the predicative calculus of inductive > > constructions, which is the logic underlying the Coq system. > > > > Isabelle/HOL is another popular system for this sort of work. > > ACL2 is (from what I can tell) of more historical interest but it > > has apparently been used for things like the formal verification > > of floating point and cache control units for real hardware. (It > > is my understanding that it has been many years since Intel would > > dare release a system where the cache unit wasn't verified, and > > the only time in decades it tried to release a non-verified FPU, > > it got the FDIV bug and has never tried that again.) There are > > some others out there, like F*, Lean, etc. > > > > Formal specifications good enough for full formal verification > > have been made for a variety of artifacts along with proofs that > > the artifacts follow the specification. There's a fully formally > > verified C compiler called CompCert for example, based on an > > operational semantics written in Coq. There's another formal > > semantics for C written in K, which is a rather different formal > > system. There's a verified microkernel, seL4, whose specification > > is written in Isabelle/HOL. There's a fully formal specification > > of the RISC V, and an associated verified RTL implementation. > > > > Generally speaking, a formal specification: > > > > 1) Must be machine readable > > 2) The semantics of the underlying specification language must > > themselves be extremely precisely described. 
You can't prove > > the consistency and completeness of the underlying system (see > > Gödel) but you _can_ still strongly specify the language. > > 3) Must be usable for formal (that is, machine checkable) proofs > > that implementations comply with the spec, so it has to be > > sufficiently powerful. Typical systems are now equivalent to > > higher order logic. > > > > From: "Hellwig Geisse" > > Sent: Mon, 06 Aug 2018 18:30:30 +0200 > >> > >> For me, a "formal spec" should serve two goals: > >> 1) You can reason about the thing that is specified. > > > > Yes. > > > > 2) The spec can be "executed" (i.e., there is an > >> interpreting mechanism, which lets the spec behave > >> like the real thing). > > > > Not always reasonable. > > > > First, it is often the case that a spec does not describe > > execution at all. See, for example, the specification of a > > sorting function I give at the end of this message: it simply > > says "a sorting function is a function such that, for all inputs, > > the return is a non-decreasing permutation of the input". This is > > not executable. It is a purely descriptive property, and you > > cannot extract an executable algorithm from the spec. > > > > Second, even when a spec amounts to a description of execution, a > > proof assistant often cannot actually execute it. For example, > > although you can reason about non-terminating execution in pCiC > > (and thus Coq), programs written in a strongly normalizing lambda > > calculus that can be used as a logic must terminate, or functions > > that did not terminate would be inhabitants of all types and the > > logic would be inconsistent. Thus, you cannot execute a program > > with infinite loops inside Coq, although you can reason about > > them (and indeed, you can reason about coinductively defined > > objects like infinite execution traces.) 
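[Editor's note: the distinction Metzger draws here, between a spec that describes and a program that computes, can be sketched outside a proof assistant. The following Python is mine, not from the thread, and all names are hypothetical: each *finite* instance of the sorting property is checkable, but the "forall al" in the Coq version quantifies over infinitely many lists, so the property itself is something to reason about, not something to run.]

```python
from collections import Counter

def is_sorted(al):
    # Non-decreasing order, mirroring: forall i j, i < j -> al[i] <= al[j]
    return all(al[i] <= al[i + 1] for i in range(len(al) - 1))

def is_permutation(al, bl):
    # Same elements with the same multiplicities (a multiset comparison)
    return Counter(al) == Counter(bl)

def satisfies_sorting_spec_on(f, samples):
    # We can only *check* the spec on finitely many inputs; the spec
    # itself ("for ALL lists al ...") is not executable.
    return all(is_permutation(al, f(al)) and is_sorted(f(al))
               for al in samples)
```

For example, satisfies_sorting_spec_on(sorted, [[3, 1, 2], []]) returns True, but no amount of such checking adds up to the universally quantified statement that Coq can prove.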
> > > > On Mon, 06 Aug 2018 14:19:31 -0700 "Steve Johnson" > > wrote: > >> I take a somewhat more relaxed view of what a spec should be: > >> It should describe a program with enough completeness that a > >> competent programmer could write it from the spec alone. > > > > I think this is a bit more relaxed than is currently accepted. > > > >> The formal systems I have seen would roll over and die when > >> presented with even a simple compiler. > > > > I don't know what this means. If it is that there aren't > > implementations of languages like pCiC, that's not true, see Coq. > > If it means no one can formally specify a compiler, given that > > formally verified compilers exist, that's also not true. > > > > The "final theorem" proving the correctness of CompCert depends on > > having an operational semantics of both C and the target > > architecture, and says (more or less) that the observed behavior > > of the input program in C is the same as the observed behavior of > > the output program (say in ARM machine language). This is a > > serious piece of work, but it is also something that has actually > > been done -- the tools are capable of the task. > > > >> Additionally, being able to specify that a particular function be > >> carried out by a heapsort, for example, would require that the > >> formalism could describe the heapsort and prove it correct. > >> These don't grow on trees... > > > > Formally verifying a couple of sorting algorithms described in > > Coq is an exercise for an intro level class on formal > > verification. I've done it. Once you have the proper primitives > > described, the specification for a sorting algorithm in Coq looks > > like this: > > > > Definition is_a_sorting_algorithm (f: list nat -> list nat) := > > forall al, Permutation al (f al) /\ sorted (f al). 
> > > > That says "the property "is_a_sorting_algorithm" over a function > > from lists of natural numbers to lists of natural numbers is that > > the output is a permutation of the input in which all the > > elements are in non-decreasing order." The definitions in > > question are very precise. For example, one definition of sorted > > (the property of being a non-decreasing list) is: > > > > Definition sorted (al: list nat) := > > forall i j, i < j < length al -> nth i al 0 <= nth j al 0. > > > > and the property of being a permutation, which is a relatively > > complicated inductively defined property, is: > > > > Inductive Permutation {A : Type} : list A -> list A -> Prop := > > perm_nil : Permutation [] [] > > | perm_skip : forall (x : A) (l l' : list A), > > Permutation l l' -> > > Permutation (x :: l) (x :: l') > > | perm_swap : forall (x y : A) (l : list A), > > Permutation (y :: x :: l) (x :: y :: l) > > | perm_trans : forall l l' l'' : list A, > > Permutation l l' -> > > Permutation l' l'' -> > > Permutation l l''. > > > > Coq starts out with basically nothing defined, by the way. Notions > > such as "natural number" and "list" are not built in. Peano > > naturals are defined in the system thusly: > > > > Inductive nat : Type := > > | O : nat > > | S : nat -> nat. > > > > The underlying basis of Coq (i.e. the Predicative Calculus of > > Inductive Constructions) is a dependently typed lambda calculus > > that's astonishingly simple, and the checker for proofs in the > > system is only a few hundred lines long -- the checker is the > > only portion of the system which needs to be trusted. > > > > In recent years, I've noted that "old timers" (such as many of us, > > myself included) seem to be unaware of the fact that systems like > > Coq exist, or that it is now relatively (I emphasize > > _relatively_) routine for substantial systems to be fully > > formally specified and then fully formally verified. > > > > > > Perry > > -- > > Perry E. 
Metzger perry at piermont.com > -- Perry E. Metzger perry at piermont.com ----- End forwarded message ----- From bakul at bitblocks.com Mon Aug 20 14:01:10 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Sun, 19 Aug 2018 21:01:10 -0700 Subject: [COFF] [TUHS] Formal Specification and Verification (was Re: TUHS Digest, Vol 33, Issue 5) In-Reply-To: <20180818155733.523a3d2d@jabberwock.cb.piermont.com> References: <1533573030.3671.98.camel@mni.thm.de> <50772e199f3dcc5d4eba34d17322b5aef0aa0441@webmail.yaccman.com> <20180818155733.523a3d2d@jabberwock.cb.piermont.com> Message-ID: <1ABEAF4F-00FE-4F06-AD3A-B713EB2C9ADC@bitblocks.com> On Aug 18, 2018, at 12:57 PM, Perry E. Metzger wrote: > > Sorry for the thread necromancy, but this is a _very_ important > topic. Perhaps it doesn't belong on tuhs but rather on coff. Surely 12 days is not all that long a period?! > This is a pretty long posting. If you don't care to read it, the TL;DR > is that formal specification and verification is now a real > discipline, which it wasn't in the old days, and there are systems to > do it in, and it's well understood. > > On 2018-08-06 at 08:52 -0700, Bakul Shah wrote: >> >> What counts as a "formal spec"? Is it like Justice Potter >> Stewart's "I know it when I see it" definition or something >> better? > > At this point, we have a good definition. A formal specification is a > description of the behavior of a program or piece of hardware in a > precise machine-readable form that may be used as the basis for fully > formal verification of the behavior of an implementation. Often these > days, the specification is given in a formal logic, such as the > predicative calculus of inductive constructions, which is the logic > underlying the Coq system. What about Denotational Semantics? Or efforts such as Dines Bjørner's META-IV[1] and later VDM? I'd consider them formal specification systems even if not machine-readable or fully automatically verifiable. 
Perhaps the definition has tightened up now that we have things like Coq and Lamport's TLA+. > > Isabelle/HOL is another popular system for this sort of work. ACL2 is > (from what I can tell) of more historical interest but it has > apparently been used for things like the formal verification of > floating point and cache control units for real hardware. (It is my > understanding that it has been many years since Intel would dare > release a system where the cache unit wasn't verified, and the only > time in decades it tried to release a non-verified FPU, it got the > FDIV bug and has never tried that again.) There are some others out > there, like F*, Lean, etc. > > Formal specifications good enough for full formal verification have > been made for a variety of artifacts along with proofs that the > artifacts follow the specification. There's a fully formally verified > C compiler called CompCert for example, based on an operational > semantics written in Coq. There's another formal semantics for C > written in K, which is a rather different formal system. There's a > verified microkernel, seL4, whose specification is written in > Isabelle/HOL. There's a fully formal specification of the RISC-V, and > an associated verified RTL implementation. seL4's verification is described here: https://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf They used Haskell to implement an "intermediate layer" between OS implementers and formal methods practitioners. The *generated* Isabelle scripts weigh in at 200K lines so I don't yet know what to think of this. There are a lot of assumptions made and crucially, for an OS kernel, they sidestep concurrency & non-determinism verification by using an event based model to avoid dealing with async. interrupts. [On the other hand, this is a 2009 paper and more work may have been done since to improve things]. 
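[Editor's note: the seL4 approach Shah summarizes, an abstract model plus a concrete implementation plus a proof that the two behave alike, is a refinement argument. The Python below is my own loose sketch with invented names; the real proof is done in Isabelle/HOL for all behaviors, not by running traces. The shape is: relate the two states, then compare observable outputs event by event.]

```python
def abstract_alloc(state, size):
    # Abstract model: free memory is just a number.
    if state["free"] >= size:
        return {"free": state["free"] - size}, "ok"
    return state, "fail"

def concrete_alloc(state, size):
    # "Implementation": free memory is an explicit list of unit blocks.
    if len(state["blocks"]) >= size:
        return {"blocks": state["blocks"][size:]}, "ok"
    return state, "fail"

def refines(events):
    # Check that both levels produce identical observable outputs, and
    # that the state relation (free == number of blocks) is preserved,
    # over one finite trace of allocation requests. A real refinement
    # proof establishes this for all traces, not sampled ones.
    a, c = {"free": 4}, {"blocks": [0, 1, 2, 3]}
    for size in events:
        a, out_a = abstract_alloc(a, size)
        c, out_c = concrete_alloc(c, size)
        if out_a != out_c or a["free"] != len(c["blocks"]):
            return False
    return True
```

Here refines([2, 1, 3, 1]) returns True; the point is only the shape of the argument, not that sampling traces constitutes proof.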
> Generally speaking, a formal specification: > > 1) Must be machine readable > 2) The semantics of the underlying specification language must > themselves be extremely precisely described. You can't prove the > consistency and completeness of the underlying system (see Gödel) > but you _can_ still strongly specify the language. > 3) Must be usable for formal (that is, machine checkable) proofs that > implementations comply with the spec, so it has to be sufficiently > powerful. Typical systems are now equivalent to higher order logic. ... > In recent years, I've noted that "old timers" (such as many of us, > myself included) seem to be unaware of the fact that systems like Coq > exist, or that it is now relatively (I emphasize _relatively_) routine > for substantial systems to be fully formally specified and then fully > formally verified. This is good to know. But I still think these are merely different levels of abstractions. The issue then is how to prove a specification at a given level correct and how to prove that the mapping to a more concrete level implements the same semantics. I also wonder about the "_relatively_ routine" application of formal verification. May be for the Intels of the world but certainly not for 99.99..9% of software. Though hopefully it *will* become more common. Thanks to Hellwig Geisse, Noel Chiappa, Steve Johnson, & especially Perry Metzger for their replies. Bakul [1] Aside: I sort of stumbled upon this area decades ago when I picked up a stack of tech. reports Prof. Per Brinch-Hansen had thrown out. It had among other things Bjørner's tutorial on META-IV and Sussman & Steele's orig. Scheme report. These led me to denotational semantics via Milner & Strachey's Theory of Programming Language Semantics book. From perry at piermont.com Mon Aug 20 22:23:50 2018 From: perry at piermont.com (Perry E. 
Metzger) Date: Mon, 20 Aug 2018 08:23:50 -0400 Subject: [COFF] [TUHS] Formal Specification and Verification (was Re: TUHS Digest, Vol 33, Issue 5) In-Reply-To: <1ABEAF4F-00FE-4F06-AD3A-B713EB2C9ADC@bitblocks.com> References: <1533573030.3671.98.camel@mni.thm.de> <50772e199f3dcc5d4eba34d17322b5aef0aa0441@webmail.yaccman.com> <20180818155733.523a3d2d@jabberwock.cb.piermont.com> <1ABEAF4F-00FE-4F06-AD3A-B713EB2C9ADC@bitblocks.com> Message-ID: <20180820082350.4ff6a8a2@jabberwock.cb.piermont.com> On Sun, 19 Aug 2018 21:01:10 -0700 Bakul Shah wrote: > > At this point, we have a good definition. A formal specification > > is a description of the behavior of a program or piece of > > hardware in a precise machine-readable form that may be used as > > the basis for fully formal verification of the behavior of an > > implementation. Often these days, the specification is given in a > > formal logic, such as the predicative calculus of inductive > > constructions, which is the logic underlying the Coq system. > > What about Denotational Semantics? Orthogonal. You can have a denotational semantics that is expressed formally in Coq, or a denotational semantics that's expressed informally on paper. Similarly for the rest. (BTW, at the moment, operational semantics are generally considered superior to denotational for a variety of practical reasons that aren't worth getting into here at the moment.) > Or efforts such as Dines > Bjørner's META-IV[1] and later VDM? I'd consider them formal > specification systems even if not machine-readable or fully > automatically verifiable. Those are no longer considered "formal" in the community. If it isn't machine checkable, it isn't formal. > Perhaps the definition has tightened > up now that we have things like Coq and Lamport's TLA+. Precisely. > > Formal specifications good enough for full formal verification > > have been made for a variety of artifacts along with proofs that > > the artifacts follow the specification. 
There's a fully formally > > verified C compiler called CompCert for example, based on an > > operational semantics written in Coq. There's another formal > > semantics for C written in K, which is a rather different formal > > system. There's a verified microkernel, seL4, whose specification > > is written in Isabelle/HOL. There's a fully formal specification > > of the RISC-V, and an associated verified RTL implementation. > > seL4's verification is described here: > https://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf > > They used Haskell to implement an "intermediate layer" > between OS implementers and formal methods practitioners. > The *generated* Isabelle scripts weigh in at 200K lines so > I don't yet know what to think of this. That's not quite correct. What they did was build a model of the OS kernel in Haskell and then use it to derive Isabelle/HOL semantics. They then produced a C implementation they believed was observationally equivalent, and generated Isabelle/HOL descriptions of what that C layer did using a C semantics they had created, and then proved the two observationally equivalent. > There are a lot > of assumptions made and crucially, for an OS kernel, they > sidestep concurrency & non-determinism verification by > using an event based model to avoid dealing with async. > interrupts. [On the other hand, this is a 2009 paper and > more work may have been done since to improve things]. seL4 has also done a lot to improve since then. It has much better guarantees on concurrency etc., and its C semantics and compilation are based on CompCert, so they closed the loop on that stuff. As you might suspect, a lot has happened in the last decade. > But I still think these are merely different levels of > abstractions. The issue then is how to prove a specification > at a given level correct You can't prove a specification correct. 
If you commit an infelicity in specifying the C programming language, that mistake is neither "correct" nor "incorrect". Consider what happens if there's a bad idea in the paper ISO C standard -- it doesn't matter that it isn't a good idea, it's normative. What you _can_ do is demonstrate that a formal specification matches an intuitive idea of what is intended by humans, but it can never be a "proof". Now, there's an important point here, and it ought to be underlined and in boldface with blinking letters, but luckily for everyone this is plain text: **Formal verification is not a way of producing "perfect" software, because we cannot know that we've proven everything that someone might find important someday. It is, however, a _ratchet_.** Once you've figured out you need some property to hold and you've proven it, you've ratcheted and will not backslide. If you formally verify a property, you know, for good, that this property holds. If you prove non-interference or memory safety, that property holds, period. When someday you discover there was a property (say, freedom from cache-timing side channels that you hadn't realized was important), you add that property to your set of properties you verify, and once you've fixed the issue and have verified it, the problem is gone. That is to say: testing gives you, at best, a good guess that your software is free of some class of problems, but verification gives you assurance you will not backslide. You've ratcheted forward on quality. You can't know that you've asked the right questions, but you absolutely know what the answers are to the questions you asked, which is not true of testing. Even with this "imperfection", the ratchet is extremely powerful, and formal verification is vastly, vastly better than testing. 
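[Editor's note: the contrast with testing can be made concrete. This Python sketch is mine, with invented names: a buggy sort passes a plausible-looking hand-written test that only checks ordering, yet violates the full specification, which also requires the output to be a permutation of the input, on the very same case.]

```python
from collections import Counter

def buggy_sort(al):
    # Deduplicates as a side effect: output is sorted but loses elements.
    return sorted(set(al))

def weak_test(f):
    # A typical hand-written test: only checks that the output is ordered.
    out = f([3, 1, 3, 2])
    return all(out[i] <= out[i + 1] for i in range(len(out) - 1))

def full_spec(f, al):
    # The whole property: sorted output AND a permutation of the input.
    out = f(al)
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    return ordered and Counter(out) == Counter(al)
```

Here weak_test(buggy_sort) returns True while full_spec(buggy_sort, [3, 1, 3, 2]) returns False: the test asked the wrong question, and even the full property only rules the bug out on inputs we thought to try, whereas a machine-checked proof of the spec rules it out on all inputs.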
CompCert has had a handful of issues found in its lifetime, versus tens of thousands in any other production compiler I can name, and all those issues were quite obscure (and none the result of a failure in verification). Perry -- Perry E. Metzger perry at piermont.com From dave at horsfall.org Wed Aug 22 09:07:54 2018 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 22 Aug 2018 09:07:54 +1000 (EST) Subject: [COFF] Happy birthday, Storm worm! Message-ID: On this day in 2007, the Storm worm hit the Internet; it was estimated that 57 million spams were sent in one day (most likely all from compromised Windoze boxes, as if there's any other sort). -- Dave From dave at horsfall.org Thu Aug 30 10:35:50 2018 From: dave at horsfall.org (Dave Horsfall) Date: Thu, 30 Aug 2018 10:35:50 +1000 (EST) Subject: [COFF] Happy birthday, John Mauchly! Message-ID: We gained computer pioneer John Mauchly on this day in 1907; he was best known as the co-inventor of ENIAC, one of the world's first computers. -- Dave