Planet Interactive Fiction

July 20, 2018

The People's Republic of IF

July Meeting Post Mortem

by Angela Chang at July 20, 2018 08:41 PM


The People’s Republic of Interactive Fiction convened on Thursday, July 19th. Zarf, Jake, Dan, Eric W B and R (age 10), and I welcomed newcomers Ben Collins-Sussman and the dynamic duo Erik J B and F (age 8). Warning: what follows is probably not proper English, just my log of notes from the meeting to jog people’s memories:

Zarf is working on his room for Cragne Manor, Ryan Veeder’s tribute to Anchorhead, in Inform; he has one month to complete it.
Zarf got a book called “Life of Zarf,” found by Eric W B.
Jake is playing through the IFComp 2017 rankings from the bottom up. He recommended Haunted P. He also shared how he hacked The Murder The Fog to make it more legible.

Ben is also working on a room at Cragne Manor.

Eric W B’s daughter R is submitting to IFComp, but she can’t talk about it.

Erik J B and F played Wishbringer.

Ben Collins-Sussman from Chicago talked about playing The Wizard Sniffer with his kid.

Erik J B asked how to get started writing IF.

Ben has a collaborator named Jack. Advice: write the transcript first, then code.

Zarf mentioned writing about the Zork inventory, and Brian Moriarty teaching at WPI.
IntroComp is going on now.

Jake mentioned the transcript competition by Emily Short.

Dan uses Google Docs to track his narratives. Ben uses custom Emacs macros.

Ben writes on a Chromebook, using Inform on Linux; Chromebooks can run a Linux VM via Crostini.

Zarf thinks about the puzzle first, then works out the mechanics; the writing comes as needed. Eric W B thinks of the place first.

Ben does a flow diagram first.

Erik J B said he downloaded some tool, but it didn’t work for him. Zarf says the first decision is to figure out whether you want parser or hypertext: TADS or Inform vs. Twine, Quest, ChoiceScript, ink by inkle, Curveship, or Vorple (JavaScript hooks for Inform 7). Dan is writing Wreck, which is on GitHub. Zarf’s Seltani is a hypertext system set in the Myst universe.

Ben ported Infocom games to Android when Google Play first started. Now Fabularium is an option.

Upcoming

Zarf says he’s planning an IF conference in Boston next summer. AdventureX is in London; WordPlay is in Toronto. We usually have a small group at PAX East. He mentioned that GET LAMP, the IF documentary, was shown at PAX East 2010 and had a great IF turnout.

GameLoop is August 4th.
Boston FIG (indie games) is September 29th.
FIG Talks are in January, great for the next generation of players.

Anjchang is still collecting “poems of two” for the Taper e-zine, due Sept 1st.
http://taper.badquar.to/1/about.html

Playtest

We playtested Zarf’s Cragne Manor room, narrated by Jake, starting in the Workroom. We had a great time and got close to the end.

Photos

https://photos.app.goo.gl/Ux6GN4xC9mFtBUke6

IFTF Blog

Read about IF and FOSS at Opensource.com

by Jason McIntosh at July 20, 2018 05:48 PM

Several months ago, after our recognition of the IF Archive’s 25th anniversary, a community moderator from Opensource.com contacted IFTF to ask if we’d like to write an article about the Archive. As the name of the website suggests, Opensource.com has interest primarily in articles about free and open-source software; while the Archive does host its share of FOSS, we ended up agreeing to write a more general study of the intersection between interactive fiction and open source.

That article, authored by yours truly, is now online, under the headline A brief history of text-based games and open source. I think it turned out pretty well, and I invite your comments on it. Enjoy!

The Digital Antiquarian

Doing Windows, Part 5: A Second Try

by Jimmy Maher at July 20, 2018 04:41 PM

The beginning of serious work on the operating system that would come to be known as OS/2 left Microsoft’s team of Windows developers on decidedly uncertain ground. As OS/2 ramped up, Windows ramped down in proportion, until by the middle of 1986 Tandy Trower had just a few programmers remaining on his team. What had once been Microsoft’s highest-priority project had now become a backwater. Many were asking what the point of Windows was in light of OS/2 and its new GUI, the Presentation Manager. Steve Ballmer, ironically the very same fellow who had played the role of Windows’s cheerleader-in-chief during 1984 and 1985, was now the most prominent of those voices inside Microsoft who couldn’t understand why Trower’s team continued to exist at all.

Windows survived only thanks to the deep-seated instinct of Bill Gates to never put all his eggs in one basket. Stewart Alsop II, editor-in-chief of InfoWorld magazine during this period:

I know from conversations with people at Microsoft in environments where they didn’t have to bullshit me that they almost killed Windows. It came down to Ballmer and Gates having it out. Ballmer wanted to kill Windows. Gates prevented him from doing it. Gates viewed it as a defensive strategy. Just look at Gates. Every time he does something, he tries to cover his bet. He tries to have more than one thing going at once. He didn’t want to commit everything to OS/2, just on the off chance it didn’t work. And in hindsight, he was right.

Gates’s determination to always have a backup plan showed its value around the beginning of 1987, when IBM informed Microsoft of their marketing plans for their new PS/2 hardware line as well as OS/2. Gates had, for very good reason, serious reservations about virtually every aspect of the plans which IBM now laid out for him, from the fortress of patents being constructed around the proprietary Micro Channel Architecture of the PS/2 hardware to an attitude toward the OS/2 software which seemed to assume that the new operating system would automatically supersede MS-DOS, just because of the IBM name. (How well had that worked out for TopView?) “About this time is when Microsoft really started to get hacked at IBM,” remembers Mark Mackaman, Microsoft’s OS/2 product manager at the time. IBM’s latest misguided plans, the cherry on top of all of Gates’s frustration with his inefficient and bureaucratic partners, finally became too much for him. Beginning to feel a strong premonition that the OS/2 train was going to run off the tracks alongside PS/2, he suddenly started to put some distance between IBM’s plans and Microsoft’s. After having all but ignored Windows for the past year, he started to talk about it in public again. And Tandy Trower found his tiny team’s star rising once again inside Microsoft, even as OS/2’s fell.

The first undeniable sign that a Windows rehabilitation had begun came in March of 1987, when Microsoft announced that they had sold 500,000 copies of the operating environment since its release in November of 1985. This number came as quite a surprise to technology journalists, whose own best guess would have pegged Windows’s sales at 10 percent of that figure at best. It soon emerged that Microsoft was counting all sorts of freebies and bundle deals as regular unit sales, and that even by their own most optimistic estimates no more than 100,000 copies of Windows had ever actually been installed. But no matter. For dedicated Microsoft watchers, their fanciful press release was most significant not for the numbers it trumpeted but as a sign that Windows was on again.

According to Paul and George Grayson, whose company Micrografx was among the few which embraced Windows 1, the public announcement of OS/2 and its Presentation Manager in April of 1987 actually lent Microsoft’s older GUI new momentum:

Everybody predicted when IBM announced OS/2 and PM [that] it was death for Windows developers. It was the exact opposite: sales doubled the next month. Everybody all of a sudden knew that graphical user interfaces were critical to the future of the PC, and they said, “How can I get one?”

You had better graphics, you had faster computers, you had kind of the acknowledgement that graphical user interfaces were in your future, you had the Macintosh being very successful. So you had this thing, this phenomenon called Mac envy, beginning to occur where people had PCs and they’d look at their DOS-based programs and say, “Boy, did I get ripped off.” And mice were becoming popular. People wanted a way to use mice. All these things just kind of happened at one moment in time, and it was like hitting the accelerator.

It did indeed seem that Opportunity was starting to knock — if Microsoft could deliver a version of Windows that was more compelling than the first. And Opportunity, of course, was one house guest whom Bill Gates seldom rejected. On October 6, 1987, Microsoft announced that Windows 2 would ship in not one but three forms within the next month or two. Vanilla Windows 2.03 would run on the same hardware as the previous version, while Windows/386 would be a special supercharged version made just for the 80386 CPU — a raised middle finger to IBM for refusing to let Microsoft make OS/2 an 80386-exclusive operating system.

But the most surprising new Windows product of all actually bore the name “Microsoft Excel” on the box. After struggling fruitlessly for the past two years to get people to write native applications for Windows, Microsoft had decided to flip that script by making a version of Windows that ran as part of an application. The new Excel spreadsheet would ship with what Microsoft called a “run-time” version of Windows 2, sufficient to run Excel and only Excel. When people tried Excel and liked it, they’d go out and buy a proper Windows in order to make all their computing work this way. That, anyway, was the theory.

Whether considered as Excel for Windows or Windows for Excel, Microsoft’s latest attempt to field a competitor to Lotus 1-2-3 already had an interesting history. It was in fact a latecomer to the world of MS-DOS, a port of a Macintosh product that had been very successful over the past two years.

After releasing a fairly workmanlike version of Multiplan for the Macintosh back in 1984, Microsoft had turned their attention to a more ambitious Mac spreadsheet that would be designed from scratch in order to take better advantage of the GUI. The wisdom of committing resources to such a move sparked considerable debate both inside and outside their offices, especially after Lotus announced plans of their own for a Macintosh product called Jazz.

Lotus 1-2-3 on MS-DOS was well on its way to becoming the most successful business application of the 1980s by combining a spreadsheet, a database, and a business-graphics application in one package. Now, Lotus Jazz proposed to add a word processor and telecommunications software to that collection on the Macintosh. Few gave Microsoft’s Excel much chance on the Mac against Lotus, the darling of the Fortune magazine set, a company which could seemingly do no wrong, a company which was arguably better known than Microsoft at the time and certainly more profitable. But when Jazz shipped on May 27, 1985, it was greeted with unexpectedly lukewarm reviews. It felt slow and painfully bloated, with an interface that felt more like a Baroque fugue than smooth jazz. For the first time since toppling VisiCalc from its throne as the queen of American business software, Lotus had left the competition an opening.

Excel for Mac shipped on September 30, 1985. In addition to feeling elegant, fast, and easy in contrast to the Lotus monstrosity, Microsoft’s latest spreadsheet was also much cheaper. It outdistanced its more heralded competitor in remarkably short order, quickly growing into a whale in the relatively small pond that was the Macintosh business-applications market. In December of 1985, Excel alone accounted for 36 percent of said market, compared to 9 percent for Jazz. By the beginning of 1987, 160,000 copies of Excel had been sold, compared to 10,000 copies of Jazz. And by the end of that same year, 255,000 copies of Excel had been sold — approximately one copy for every five Macs in active use.

Such numbers weren’t huge when set next to the cash cow that was MS-DOS, but Excel for the Macintosh was nevertheless a breakthrough product for Microsoft. Prior to it, system software had been their one and only forte; despite lots and lots of trying, their applications had always been to one degree or another also-rans, chasing but never catching market leaders like Lotus, VisiCorp, and WordPerfect. But the virgin territory of the Macintosh — ironically, the one business computer for which Microsoft didn’t make the system software — had changed all that. Microsoft’s programmers did a much better job of embracing the GUI paradigm than did their counterparts at companies like Lotus, resulting in software that truly felt like it was designed for the Macintosh from the start rather than ported over from another, more old-fashioned platform. Through not only Excel but also a Mac version of their Word word processor, Microsoft came to play a dominant role in the Mac applications market, with both products ranking among the top five best-selling Mac business applications more months than not during the latter 1980s. Now the challenge was to translate that success in the small export market that was the Macintosh to Microsoft’s sprawling home country, the land of MS-DOS.

On August 16, 1987, Microsoft received some encouraging news just as they were about to take up that challenge in earnest. For the first time ever, their total sales over the previous year amounted to enough to make them the biggest software company in the world, a title which they inherited from none other than Lotus, who had enjoyed it since 1984. The internal memo which Bill Gates wrote in response says everything about his future priorities. “[Lotus’s] big distinction of being the largest is being taken away,” he said, “before we have even begun to really compete with them.” The long-awaited version of Excel for PC compatibles would be launched with two important agendas in mind: to hit Lotus where it hurt, and to get Windows 2 — some form of Windows 2 — onto people’s computers.

Excel for Windows — or should we say Windows for Excel? — reached stores by the beginning of November 1987, to a press reception that verged on ecstatic. PC Magazine‘s review was typical:

Microsoft Excel, the new spreadsheet from Microsoft Corp., could be one of those milestone programs that change the way we use computers. Not only does Excel have a real chance of giving 1-2-3 its most serious competition since Lotus Development Corp. introduced that program in 1982, it could finally give the graphics interface a respectable home in the starched-shirt world of DOS.

For people who cut their teeth on 1-2-3 and have never played with a Mac, Excel looks more like a video game than a serious spreadsheet. It comes with a run-time version of Microsoft Windows, so it has cheery colors, scroll bars, icons, and menu bars. But users will soon discover the beauty of Windows. Since it treats the whole screen as graphics, you can have different spreadsheets and charts in different parts of the screen and you can change nearly everything about the way anything looks.

Excel won PC Magazine‘s award for “technical excellence” in 1987 — a year that notably included OS/2 among the field of competitors. The only thing to complain about was performance: like Windows itself, Excel ran like a dog on an 8088-based machine and sluggishly even on an 80286, requiring an 80386 to really unleash its potential.

Especially given the much greater demands Excel placed on its hardware, it would have to struggle long and hard to displace the well-entrenched Lotus 1-2-3, but it would manage to capture 12 percent of the MS-DOS spreadsheet market in its first year alone. In the process, the genius move of packaging Windows itself with by far the most exciting Windows application yet created finally caused significant numbers of people to actually start using Microsoft’s GUI, paving the road to its acceptance among even the most conservative of corporate users. Articles by starch-shirted Luddites asking what GUIs were really good for became noticeably less common in the wake of Excel, a product which answered that question pretty darn comprehensively.

Of course, Excel could never have enjoyed such success as the front edge of Microsoft’s GUI wedge had the version of Windows under which it ran not been much more impressive than the first one. Ironically, many of the most welcome improvements came courtesy of the people from the erstwhile Dynamical Systems Research, the company Microsoft had bought, at IBM’s behest, for their work on a TopView clone that could be incorporated into OS/2. After IBM gave up on that idea, most of the Dynamical folks wound up on the Windows team, where they did stellar work. Indeed, one of them, Nathan Myhrvold, would go on to become Microsoft’s chief software architect and still later head of research, more than justifying his little company’s $1.5 million purchase price all by himself. Take that, IBM!

From the user’s perspective, the most plainly obvious improvement in Windows 2 was the abandonment of Scott MacGregor’s pedantic old tiled-windows system and the embrace of a system of sizable, draggable, overlappable windows like those found on the Macintosh. For the gearheads, though, the real excitement lay in the improvements hidden under the hood of Windows/386, which ran entirely in the 80386’s “virtual” mode. Windows itself and its MS-DOS plumbing ran in one virtual machine, and each vanilla MS-DOS application the user spawned therefrom got another virtual machine of its own. This meant that as many MS-DOS applications and native Windows applications as one wished could now be run in parallel, under a multitasking model that was preemptive for the former and cooperative for the latter. The 640 K barrier still applied to each of the virtual machines and was thus still a headache, requiring the usual inefficient workarounds in the form of extended or expanded memory for applications that absolutely, positively had to have access to more memory. Still, having multiple 640 K virtual machines at one’s disposal was better than having just one.
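To make the contrast concrete, here is a toy sketch in Python of the two scheduling models Windows/386 combined. It is purely illustrative and bears no resemblance to Microsoft’s actual implementation, which depended on hardware timer interrupts and the 80386’s virtual mode; the task names are invented.

# Cooperative scheduling (native Windows applications): a task runs until
# it voluntarily yields. One selfish task that never yields would starve
# every other task in the system.
def cooperative_schedule(tasks):
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)            # runs until the task's next `yield`
            queue.append(task)    # well-behaved: go to the back of the line
        except StopIteration:
            pass                  # task finished

def win_app(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                     # cooperative: hands the CPU back

cooperative_schedule([win_app("WinApp-1", 2), win_app("WinApp-2", 2)])

# Preemptive scheduling (the MS-DOS virtual machines): the supervisor slices
# time itself, so even a program that never cooperates gets interrupted.
# Each "task" here is just a list of instructions stepped through one at a time.
def preemptive_schedule(dos_boxes):
    while any(dos_boxes):
        for box in dos_boxes:
            if box:
                print(box.pop(0))   # forced context switch after every step

preemptive_schedule([["DOS-1: crunch", "DOS-1: crunch"],
                     ["DOS-2: print", "DOS-2: print"]])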

Windows/386 was arguably the first version of Windows that wasn’t more trouble than it was worth for most users. If you had the hardware to run it, it was a very compelling product, even if the realities of the software market meant that you used it more frequently to multitask old-school MS-DOS applications than to run native Windows applications.

A Quick Tour of Windows/386 2.11

Microsoft themselves didn’t seem to entirely understand the relationship between Windows and OS/2’s Presentation Manager during the late 1980s. This version of Windows inexplicably bears the “Presentation Manager” name on some of its disks as well. No wonder users were often confused as to which product was the real Microsoft GUI of the future. (Version 2.11 of Windows/386, the one we’re looking at here, was released some eighteen months after the initial 2.01 release, which I couldn’t ever manage to get running under emulation due to strange hard-drive errors. But the material differences between the two versions are relatively minor.)

Windows 2 remains less advanced than Presentation Manager in many ways, such as the ongoing lack of any concept of application installation. Without it, we’re still left to root around through the individual files on the hard disk in order to start applications.

Especially in the Windows/386 version, most of the really major improvements that came with Windows 2 are architectural enhancements hidden under the hood. There is, however, one glaring exception to that rule: the tiled windows are gone, replaced with a windowing system that works the way we still expect such things to work today. You can drag these windows, you can size them, and you can overlap them as you will.

The desktop concept is still lacking, but we’re making progress. Like in Windows 1, icons on the desktop only represent running applications that have been minimized. Unlike in Windows 1, these icons can now be dragged anywhere on the proto-desktop. From here, it would require only one more leap of logic to start using the icons to represent things other than running applications. Baby steps… baby steps.

Windows/386 removes some, if by no means all, of the sting from the 640 K barrier. Thanks to the 80386’s virtual mode, vanilla MS-DOS applications can now run in memory above the 640 K barrier, leaving the first 640 K free for Windows itself and native Windows applications. So few compelling examples of the latter existed during Windows/386’s heyday that the average user was likely to spend a lot more time running MS-DOS applications with it anyway. In this scenario, memory was ironically much less of a problem than it would have been had the user attempted to run many native applications.

One of the less heralded of Microsoft’s genius marketing moves has been the use of Microsoft Excel as a sort of Trojan horse to get people using Windows. When installed on a machine without Windows, Excel also installs a “run-time” version of the operating environment sufficient only to run itself. Excel would, as Microsoft’s thinking went, get people used to a GUI and get them asking why they couldn’t use one for the other tasks they had to accomplish on the computer. “Why, as a matter of fact you can,” would go Microsoft’s answer. “You just need to buy this product called Windows.” Uptake wouldn’t be instant, but Excel did become quite successful as a standalone product, and did indeed do much to pave the way for the eventual near-universal acceptance of Windows 3.

Excel running under either a complete or a run-time version of Windows. When it appeared alongside Windows 2 in late 1987, it was by far the most sophisticated and compelling application yet made for the environment, giving the MS-DOS-using masses for the first time a proof of concept of what a GUI could mean in the real world.

Greatly improved though it was, Windows 2 didn’t blow the market away. Plenty of the same old problems remained, beginning and for many ending with the fact that seeing it at its best required a pricey 80386-based computer. In light of this, third-party software developers didn’t exactly stampede onto the Windows bandwagon. Still, having been provided with such a wonderful example in the form of Microsoft Excel of how compelling (and profitable) a Windows application done right could be, some of them did begin to commit some resources to Windows as well as vanilla MS-DOS. Throughout the life of Windows 2, Microsoft made a standard practice of giving their run-time version of it to outside application developers as well as their own, all in a bid to give people a taste of a GUI through the word processor, spreadsheet, or paint program they were going to buy anyway. To some extent at least, it worked. Some users turned that taste into a meal by buying boxed copies of Windows 2, and still more were intrigued enough to quit scoffing and start accepting that GUIs in general might truly be a better way to get their work done — if not now, then at some point in the future, when the hardware and the software had both gotten a little bit better.

By the spring of 1988, Windows was still at least an order of magnitude away from meeting the goal Bill Gates had once stated it would manage before the end of 1984: that of being installed on 90 percent of all MS-DOS computers. But, even if Windows 2 hadn’t blown anyone away, it was doing considerably better than Windows 1, and certainly seemed to have more momentum than OS/2’s as-yet-unreleased Presentation Manager. Granted, neither of these were terribly high bars to clear — and yet there was a dawning sense that Windows, six and a half years on from its birth as the humble Interface Manager, might just get the last laugh on the MS-DOS command line after all. Microsoft was already formulating plans for a Windows 3, which was coming to be seen both inside and outside the company as the pivotal version, the point where steadily improving hardware would combine with better software to break the GUI into the business-computing mainstream at long last. No, it wasn’t 1984 any more, but better late than never.

And then a new development threatened to pull the rug out from under all the progress that had been made. On March 17, 1988, Apple blindsided Microsoft by filing a lawsuit against them in federal court, alleging that the latter had stolen the former’s intellectual property by copying the “look and feel” of the Macintosh GUI. With the gauntlet thus thrown down, the stage was set for one of the most titanic legal battles in the history of the computer industry, one with the potential to fundamentally alter the very nature of the software business. At stake as well was the very existence of Windows just as it finally seemed to be getting somewhere. And as went Windows, Bill Gates was coming to believe once again, so went Microsoft. In order for both to survive, he would now have to win a two-front war: one in the marketplace, the other in the court system.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Magazine of November 10 1987, November 24 1987, December 22 1987, April 12 1988, and September 12 1989; Byte of May 1988 and July 1988; Tandy Trower’s “The Secret Origins of Windows” on the Technologizer website. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)

July 19, 2018

Choice of Games

New Hosted Game! The Harbinger’s Head by Kim Berkley

by Rachel E. Towers at July 19, 2018 04:41 PM

Hosted Games has a new game for you to play!

Visit a myth-infested 1820s Ireland. One dark (if not particularly stormy) night, you find yourself face to face with a frightening visage—or lack thereof. Though shaped like a man, the creature you’ve encountered appears to have lost his head. Worse, he seems to think you might be the one to blame! It’s 33% off until July 26th!

The Harbinger’s Head is a fantastic 46,000-word interactive horror novel by Kim Berkley, where your choices control the story. It’s entirely text-based—without graphics or sound effects—and fueled by the vast, unstoppable power of your imagination.

It’s up to you to prove your innocence and discover the true thief of the harbinger’s head before your own winds up on the chopping block!

• Play as male, female, or non-binary.
• Step into the shoes of an herbalist, schoolteacher, or lamplighter.
• Shape your personality and build your skills through the choices you make, or trust your luck at your own peril.
• Make friends—or enemies—of the various Fae creatures you’ll encounter along the way.
• Discover one of eight endings…or meet an untimely death.

Kim Berkley developed this game using ChoiceScript, a simple programming language for writing multiple-choice interactive novels like these. Writing games with ChoiceScript is easy and fun, even for authors with no programming experience. Write your own game and Hosted Games will publish it for you, giving you a share of the revenue your game produces.

These Heterogenous Tasks

The Mind of Margaret

by Sam Kabo Ashwell at July 19, 2018 01:41 AM

Awright. Going over some of the more noteworthy things I played at Go Play NW this year: The Mind of Margaret (Drew Besse) is a storygame in which the players all play different emotions (or other motivating drives) of a single character. … Continue reading

July 18, 2018

Renga in Blue

Zork I: The Death of a Thief

by Jason Dyer at July 18, 2018 09:40 PM

Cover from a C64 version, via Mobygames.

I did, indeed, discover the 20 treasures and survive. As this is my endgame post, the usual spoiler warnings apply.

. . . . .

Last time I left off, I was stuck in a Temple where I couldn’t get back up a rope. Regular readers of this blog may remember my old nemesis: missing an exit:

Temple
This is the north end of a large temple. On the east wall is an ancient inscription, probably a prayer in a long-forgotten language. Below the prayer is a staircase leading down. The west wall is solid granite. The exit to the north end of the room is through huge marble pillars.
There is a brass bell here.

I went down, to find

Egyptian Room
This is a room which looks like an Egyptian tomb. There is an ascending staircase to the west.
The solid-gold coffin used for the burial of Ramses II is here.

and assumed that was it. I was foiled partially by how I drew my map: my “down” connection was somewhat to the south of the Temple, so I conflated the two exits. The room does state it is a “north end” which suggests a south end, even though there’s no explicit mention of a south exit. Back in the Temple:

> S
Altar
This is the south end of a large temple. In front of you is what appears to be an altar. In one corner is a small hole in the floor which leads into darkness. You probably could not get back up it.
On the two ends of the altar are burning candles.
On the altar is a large black book, open to page 569.

The “small hole” drops you back into the dungeon proper. However, the difficulty isn’t over yet! The large gold coffin (which is a treasure) in the Egyptian Room is too heavy to tote down.

> D
You haven’t a prayer of getting the coffin down there.

This is a case where I got stuck on the easy part (misparsing the room and missing an exit) but immediately realized how to solve the hard part.

> PRAY
Forest
This is a forest, with trees in all directions. To the east, there appears to be sunlight.

My experience carried me through here. Mainframe Zork didn’t have this puzzle, but it did have one involving the matchbook where I needed to type >SEND FOR BROCHURE as a literal command, as it was mentioned in the text. This sort of literal-typing-what’s-in-the-text puzzle still doesn’t have a good name, although it probably should (anyone have candidates?)

The mental twist needed to interpret an aside in the “running monologue” of the game as a command is a little like how a clue in a cryptic crossword often needs the solver to reinterpret a noun as a verb or an adjective as a noun. Example: “Drunk rested in bars (6)”. At first read, “bars” is a noun. The way to solve this clue is to make an anagram of “rested” (make it drunk, so to speak) in order to define the verb bars. This sort of mental shift of meaning and assumption is a common tactic for writing all puzzles, but again I don’t know of it having a particular name.
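For the record, the intended answer here is presumably DETERS, an anagram of RESTED that means “bars” in the sense of “prevents”; a quick Python check confirms the letters match:

# "Drunk rested in bars (6)": "drunk" flags an anagram of "rested",
# and the definition is the verb "bars". DETERS fits both.
assert sorted("rested") == sorted("deters")  # same six letters, rearranged
print("anagram check passed")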

. . . . .

-> The Thief is constantly moving about.

[1 hint left.]

-> There is a high probability that he will take valuable objects (except the gold coffin) which you have seen. There is a much lower probability that he will take a nonvaluable object (again, only if you have seen it), and he may later decide to drop it.

The above text is from the “Invisiclues” for the game.

The thief is what makes the game still interesting and worth playing today. There are various narratives with “constant antagonists” (say, Kefka in Final Fantasy VI) but they still only appear at set moments. The thief can appear at any time when underground and often requires a change in intention and plans; even though he focuses on treasures, some of the treasures are used for solving puzzles. Because of the random yet universal nature of his appearances, even when he isn’t there, he is there.

The way to defeat the thief is to get a high enough score that you have good luck in battle. (In other words, gathering more treasure makes you better in combat … which makes a CRPG sort of sense, although it certainly puzzled me back in the 1980s when I first played this game.) However, since the thief is trying his best to steal treasure (either directly from your hands or from the ground), he is taking away those very points you need to eventually meet him in a final battle. It became a genuinely tense race as I tried to stockpile enough points / treasures that I would be able to survive a direct confrontation. I eventually tried it at a little less than 200 points out of 350, and after multiple tries, got to this sequence:

> kill thief with sword
The blow lands, making a shallow gash in the thief’s arm!
The thief neatly flips your sword out of your hands, and it drops to the floor.

> kill thief with sword
You don’t have the sword.

> get sword
Taken.
A quick thrust pinks your left arm, and blood starts to trickle down.

…[about five more exchanges of blows go here]…

> kill thief with sword
Your sword misses the thief by an inch.
The thief stabs nonchalantly with his stiletto and misses.

> kill thief with sword
A savage blow on the thigh! The thief is stunned but can still fight!
The thief stabs nonchalantly with his stiletto and misses.

> kill thief with sword
A good slash, but it misses the thief by a mile.
You parry a lightning thrust, and the thief salutes you with a grim nod.

> kill thief with sword
It’s curtains for the thief as your sword removes his head.
Almost as soon as the thief breathes his last breath, a cloud of sinister black fog envelops him, and when the fog lifts, the carcass has disappeared.
As the thief dies, the power of his magic decreases, and his treasures reappear:
A stiletto
A trunk of jewels
A jewel-encrusted egg, with a golden clockwork canary
A crystal trident
The chalice is now safe to take.
Your sword is no longer glowing.

After the constant antagonism – and truly constant, not just once in a while – this was a deeply satisfying moment. Though it was just coincidence and luck, I especially appreciated the salute a turn before the thief died.

There is one major downside, which you can see from the items dropped. The “jeweled egg” is one of the early treasures you can find, but you can’t open it – however, the thief, with nimbler fingers than yours, can. The golden clockwork canary counts as a separate treasure.

In other words, if you haven’t passed the egg off to the thief by this point in the game, you will not be able to win the game, and likely not discover this fact until the very end. It’s perhaps even crueler than the usual Cruel because it’s unclear that opening the egg is even a required action, so you have to first conceive there might be a puzzle in the first place and then realize your method for solving it is already gone.
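As a purely hypothetical illustration of the score-based luck described above – not Infocom’s actual formula – here is a Python sketch in which combat luck scales with score; every number in it is invented:

import random

def win_chance(score, max_score=350):
    # Invented curve: luck improves as treasure piles up.
    return 0.1 + 0.8 * (score / max_score)

def simulate(score, trials=1000):
    rng = random.Random(score)  # fixed seed so the run is repeatable
    return sum(rng.random() < win_chance(score) for _ in range(trials))

for score in (50, 200, 350):
    print(f"score {score}: won {simulate(score)} of 1000 simulated duels")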

My complete map of the underground of Zork I. Click for a larger PDF file version.

. . . . .

This happens on death, at least once you’re far enough in the game:

As you take your last breath, you feel relieved of your burdens. The feeling passes as you find yourself before the gates of Hell, where the spirits jeer at you and deny you entry. Your senses are disturbed. The objects in the dungeon appear indistinct, bleached of color, even unreal.

Let’s pause for a moment with that last sentence.

The objects in the dungeon appear indistinct, bleached of color, even unreal.

Why did this make me stop and admire? Here’s the sentence de-evolved just a step:

The objects in the dungeon appear indistinct and bleached of color.

While “indistinct” and “bleached of color” are strong, they’re essentially descriptive. There’s no sense of the mystical. It describes the events directly.

Change back to “indistinct, bleached of color, even unreal” and the effect (for me at least) returns. “Even” is a curious word choice here. It can mean “free from variation” or essentially “flat” but also “this outlier is included” (he ate all the candies, even the sour ones). Indistinct and bleached of color are already unreal, so it’s on the same “flat” level, but the specific phrasing suggests the unrealness is an outlier. So the unreal is both congruent with the bleaching of color but also discordant. The unreal is suggested in a way that is … skeptical, perhaps?

. . . . .

Back cover for the Japanese Playstation version of Zork I, via Mobygames.

> ENTER TOMB
Inside the Barrow
As you enter the barrow, the door closes inexorably behind you. Around you it is dark, but ahead is an enormous cavern, brightly lit. Through its center runs a wide stream. Spanning the stream is a small wooden footbridge, and beyond a path leads into a dark tunnel. Above the bridge, floating in the air, is a large sign. It reads: All ye who stand before this bridge have completed a great and perilous adventure which has tested your wit and courage. You have mastered ZORK: The Great Underground Empire.

Your score is 350 (total of 350 points), in 711 moves.
This gives you the rank of Master Adventurer.

This is the first game in my series that you can still buy; it’s in the Zork Anthology sold on Steam and GOG. (There’s also a multitude of online versions.) It really is worth a try if you’ve never experienced it.

I’m not 100% sure on my schedule after this, but I will likely take down a simple TRS-80 game or two next and then dive into Haunt, one of the strangest of all the mainframe games.

July 17, 2018

Emily Short

My Lady’s Choosing (Kitty Curran, Larissa Zageris)

by Emily Short at July 17, 2018 02:40 PM

My Lady’s Choosing is a branching romance novel — or, arguably, spoof of romance novels. It begins with the heroine as a just-about-penniless lady’s companion, then immediately introduces her to two eligible bachelors and one wealthy and outrageous female friend.

From there, we are offered a buffet of standard tropes. There’s your obligatory Scottish hero with a castle and a lot of dialect-speaking servants who’ve known him since his youth. There’s a Mr Darcy-minus-the-serial-number whose estate is called (I am not making this up) Manberley. There’s a Jane Eyre plot strand with the brooding Man With A Past and a surviving child (and an optional Distraction Vicar if you want to go that way). There’s a subplot with Napoleonic spies and another subplot involving raising the lost tomb of Hathor in Egypt. There’s a side character who is a callback to a side character in Emma, and a sinister servant who owes a lot to Mrs. Danvers, and an obligatory call-out to that summer on the shores of Lake Geneva with Lord Byron. The encyclopedic approach to tropes reminded me of Tough Guide to Fantasyland, as transported to another genre.

The very first seeming choice is an intentional non-option: you’re offered two choices, but immediately told that the second would lead to living on the streets in shame and poverty. Life is hard for penniless heroines in the 19th century, it admits, with little regard for the fourth wall.

Then the choice structure is funneled just enough that we have to encounter two of the heroes and the female romance option. Past that point, it opens up very broadly, Sorting Hat-style.

[Diagram: a map of the book’s branching structure.]

This diagram: is spoilery if you zoom in, intentionally conflates some nodes where there is no branching, and is known to be missing some branches, especially in the Egypt (purple) and Scotland (green) storylines, when I wasn’t taking as good notes. Consider it more an impression than a perfect map. Colors indicate which lover you’re currently making a top priority, and grey nodes represent points where you’re not committed.

After your first few choices, there’s usually one primary romantic interest at the moment, and you’re described as being overwhelmingly, distractingly attracted to that person: a standard romance novel trope that becomes funnier or at least different in tone when our heroine shifts her attention so frequently. It’s hard to get deeply invested in any of these characters or to feel that my protagonist is deeply in love with them, given the rapid rotation of options.

However, you pretty much always have the option to opt out of your involvement with that person and go do something else with someone else. In one playthrough, I was fleetingly enamored of the story’s Darcy-like, but then he got embroiled in a family scandal and looked like losing all his money, so I ran off to Egypt with a lesbian archaeologist instead.

As far as I can tell, every single ending of the story could be considered happy, if you’re inclined that way. The majority involve a life of straight bliss with a major or minor character. A few set you up with a woman, or with solitary freedom to play the field. A couple of the endings are supernatural in nature. But they’re all framed as positive; there are no bad endings that I saw.

As a story, therefore, any given playthrough tends not to be dramatically well-shaped: there may be odd false starts before the meat of your adventure begins, or the ending you find may be a strange pendant to a plot that was mostly about something and someone completely different. And even if you do pick a course of action early and stick with it to the end — for instance, pursuing Mac the Scotsman from the first scene and never leaving him for other adventures — the result is somewhat impressionistic. You have a few sex scenes with your chosen lover, a couple of plot twists — but no slow builds, no suspense. It’s as though someone had saved the key passages out of a 300-page historical and left the rest behind.

But because pretty much all of the plotlines are drawn from familiar sources, I rarely felt like I didn’t know where something was going. The only question was whether I wanted to stay on that track towards its inevitable conclusion, or whether I wanted to switch. This book is not really about being a romance novel heroine. It’s about being a romance novel reader.

See also:

  • This 2008 Gamasutra article I wrote about choice structures in IF romances past, noting that they often take the heroine’s desire for granted but leave up to her what she wants to do about it
  • Tara Reed’s Love Him Not, which I reviewed back when it was still under a different title. That book is structurally very, very different from this one, which makes for an interesting point of comparison
  • Choose Your Erotica, a review I wrote of a few more sex-focused interactive stories

Classic Adventure Solution Archive

CASA Update - 19 new game entries, 3 new solutions, 2 new maps, 1 new hint

by Gunness at July 17, 2018 10:54 AM

I hope that everybody has made it safely through the World Cup - and congratulations to our French friends for the thrilling finals!

Around these parts it's been unusually hot for weeks on end, so perhaps some light reading might be preferable to straining your noodle over puzzles. Shaun McClure, bona fide Spectrum artist and author of two books on Spectrum gaming, is working on a new book on Speccy text adventures. And he's been a really nice bloke and allowed us to reproduce some of the interviews for said book. So here for your reading pleasure are interviews with Charles Cecil, Scott Adams and Mel Croucher. The latter might not have a huge roster of adventure games, but he's had a fascinating career all the same. Thanks to Shaun for allowing us to use his interviews. Let's see if there might be more happening on that front.

Contributors: Sylvester, iamaran, Strident, Gunness

July 15, 2018

Emily Short

Mid-July Link Assortment

by Emily Short at July 15, 2018 03:40 PM

July 15 is the next meeting of the Boston/Cambridge PR-IF.

July 21st, 3-5 p.m. at Mad City Coffee in Columbia, the Baltimore/DC group meets to discuss The Wand.

July 31st in Canterbury (UK) there is a session on how to build escape rooms for libraries.

Gothic Novel Jam is a jam for games or works inspired by the gothic novel in any fashion, and is running throughout July. IF and related narrative games are welcome.

IntroComp is now accepting intents to enter. IntroComp is a competition in which you can submit just an excerpt of an unfinished interactive fiction game, and receive feedback from players about what they liked or didn’t like about it. If you’d like to participate as an author, register with the site immediately. Games themselves must be submitted by July 31 and judging will occur during August.

Entry registration and prize donation for IF Comp are now open as well, if you’re expecting to have something more complete in the near future.

August 4 is the next meeting of the SF Bay IF Meetup.

August 15, London IF Meetup hears from James Wallis on tabletop RPGs and storygames. This is a field where some of the most interesting narrative design is happening right now, and Wallis is an expert. As always, the event is free and there are drinks and hanging-out afterward.

inkle studios has announced ink jam, a jam for people writing in ink, running August 31-September 3.

November 10-11, AdventureX will return, this time at the British Library. AdventureX is a conference focused on narrative rich games, whether those are mobile or desktop, text-based or graphical; it’s grown significantly in size and professionalism over the last couple of years, and last year pretty definitively outgrew its previous venue. I am mentioning this well in advance because they’ve mentioned that tickets will be cheaper for early bird buyers — so it’s something to keep an eye on if you think you’ll want to go.

July 13, 2018

Zarf Updates

A question about Magic the Gathering rules timing

by Andrew Plotkin ([email protected]) at July 13, 2018 11:10 PM

Not a question about card effect timing, but about the timing of the development of the rules!
This nifty article just came orbiting through my Twitter stream, about the history of Magic's rules. It has some delightful quotes:
The timing of spells is occasionally rather tricky. -- MtG rules, Revised, April 1994
Usually, figuring out what happens first in Magic is pretty easy. -- MtG rules, Fourth Edition, April 1995
However, I want to ask about this claim from the blog post:
Which takes us to the end of our journey, 5th edition. 5th was released in march 1997, and at this time professional magic tournaments was thriving. Hence, any ambiguity of the previous rules had been cleaned up or removed. The rules for timing however were more complex than ever.
-- Magnus de Laval, blog post, August 2014
You know what else happened that month? The release of the first big MtG videogame. (MicroProse, March 1997.)
The videogame included most of the cards through Fourth Edition, but operated under the brand-new 5thE rules:
That's because in Shandalar, the rules used are the official interpretations supplied by Wizards of the Coast. These up-to-date rules are ruthlessly enforced, and there is no room for negotiation, argument, intimidation of your opponent, or weaseling your way through loopholes.
Tough luck, all you whiny rules lawyers.
This version of Magic: The Gathering enforces the official Fifth Edition rules.
-- MtG game manual, MicroProse, 1997
I'm not sure when development started on the game. But in 1996 and 1997, the WOTC designers must have fielded a steady stream of haggard MicroProse developers asking "But how do you resolve this corner case? How do the timing rules really work?"
My long-held theory is that the clarifications and cleanups of 5thE are not so much because of the tournaments, but rather because of the effort of making the videogame behave consistently.
If you've played any modern board/card game with a computer implementation, like Ascension or RFTG, you know that the computer version quickly becomes "the real version" in your head. The easiest way to answer rules questions at Game Night is to say "The videogame does it this way." So my gut feeling is that MtG must have been the first big example of this.
But I don't know for sure. I only played a bit of MtG in the earliest days; I was never involved with the tournament scene.
Can anybody say more about this development history?
The next MtG rules update, Sixth Edition (April 1999), completely revamped the timing algorithm. Which we can fairly call an algorithm at that point! 6thE spell resolution uses a "stack", in the programming sense. So the computer paradigm obviously had an influence on its development. But that's a couple of years after the change I'm asking about.
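For anyone who hasn't programmed: a stack resolves last in, first out. Here's a minimal sketch of the Sixth Edition idea in Python -- toy code, not a real rules engine, with a couple of familiar cards standing in as examples:

stack = []
stack.append("Lightning Bolt targeting Grizzly Bears")  # first spell cast
stack.append("Giant Growth targeting Grizzly Bears")    # response, goes on top

while stack:
    print("Resolving:", stack.pop())  # most recent item resolves first

# Giant Growth resolves first, so the 2/2 Bears are 5/5 by the time the
# Bolt's 3 damage arrives -- and they survive.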

The Digital Antiquarian

Doing Windows, Part 4: The Rapprochement

by Jimmy Maher at July 13, 2018 04:41 PM

We’ve seen how the pundits had already started speculating like crazy long before the actual release of IBM’s TopView, imagining it to be the key to some Machiavellian master plan for seizing complete control of the personal-computer market. But said pundits were giving IBM a bit more credit than perhaps was due. The company nicknamed Big Blue was indeed a big, notoriously bureaucratic place, and that reality tended to interfere with their ability to carry out any scheme, Machiavellian or otherwise, with the single-minded focus of a smaller company. There doubtless were voices inside IBM who could imagine using TopView as a way of shoving Microsoft aside, and had the product been a roaring success those voices doubtless would have been amplified. Yet thanks to the sheer multiplicity of voices IBM contained, the organization always seemed to be pulling in multiple directions at once. Thus even before TopView hit the market and promptly fizzled, a serious debate was taking place inside IBM about the long-term future direction of their personal computers’ system software. This particular debate didn’t focus on extensions to MS-DOS — not even on an extension like TopView which might eventually be decoupled from the unimpressive operating system underneath it. The question at hand was rather what should be done about creating a truly holistic replacement for MS-DOS. The release of a new model of IBM personal computer in August of 1984 had given that question new urgency.

The PC/AT, the first really dramatic technical advance on the original IBM PC, used the new Intel 80286 processor in lieu of the older machine’s 8088. The 80286 could function in two modes. In “real” mode, grudgingly implemented by Intel’s engineers in the name of backward compatibility, it essentially was an 8088, with the important difference that it happened to run much faster. Otherwise, though, it shared most of the older chip’s limitations, most notably the ability to address only 1 MB of memory — the source, after the space reserved for system ROMs and other specialized functions was subtracted, of the original IBM PC’s limitation to 640 K of RAM. It was only in the 80286’s “protected” mode that the new chip’s full capabilities were revealed. In this mode, it could address up to 16 MB of memory, and implemented hardware memory protection ideal for the sort of modern multitasking operating system that MS-DOS so conspicuously was not.
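The arithmetic behind those limits is just a matter of address-bus width, easy to verify:

print(2 ** 20)     # 8088/real mode: 20 address lines = 1,048,576 bytes (1 MB)
print(2 ** 24)     # 80286 protected mode: 24 address lines = 16,777,216 bytes (16 MB)
print(1024 - 384)  # 1024 K minus the 384 K reserved for ROMs and video = 640 K of RAM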

The crux of IBM’s dilemma was that MS-DOS, being written for the 8088, could run on the 80286 only in real mode, leaving most of the new chip’s capabilities unused. Memory beyond 640 K could thus still be utilized only via inefficient and ugly hacks, even on a machine with a processor that, given a less clueless operating system, was ready and willing to address up to 16 MB. IBM therefore decided that sooner or later — and preferably sooner — MS-DOS simply had to go.

This much was obvious. What was less obvious was where this new-from-the-ground-up IBM operating system should come from. Over months of debate, IBM’s management broke down into three camps.

One camp advocated buying or licensing Unix, a tremendously sophisticated and flexible operating system born at AT&T’s Bell Labs. Unix was beloved by hackers everywhere, but remained under the thumb of AT&T, who licensed it to third parties with the wherewithal to pay for it. Ironically, Microsoft had had a Unix license for years, using it to create a product they called Xenix, by far the most widely used version of Unix on microcomputers during the early 1980s. Indeed, their version of Xenix for the 80286 had of late become the best way for ambitious users not willing to settle for MS-DOS to take full advantage of the PC/AT’s capabilities. Being an already extant operating system which Microsoft among others had been running on high-end microcomputers for years, a version of Unix for the business-microcomputing masses could presumably be put together fairly quickly, whether by Microsoft or by IBM themselves. Yet Unix, having been developed with bigger institutional computers in mind, was one heck of a complicated beast. IBM feared abandoning MS-DOS, with its user-unfriendliness born of primitiveness, only to run afoul of Unix’s user-unfriendliness born of its sheer sophistication. Further, Unix, having been developed for text-only computers, wasn’t much good for graphics — and thus not much good for GUIs. The conventional wisdom held it to be an operating system better suited to system administrators and power users than secretaries and executives.

A second alternative was for IBM to make a new operating system completely in-house for their personal computers, just as they always had for their mainframes. They certainly had their share of programmers with experience in modern system software, along with various projects which might become the basis of a new microcomputer operating system. In particular, the debaters returned over and over again to one somewhat obscure model in their existing personal-computer lineup. Released in late 1983, the 3270 PC came equipped with a suite of additional hardware and software that let it act as a dumb terminal for a mainframe, while also running — simultaneously with multiple mainframe sessions, if the user wished — ordinary MS-DOS software. To accomplish that feat, IBM’s programmers had made a simple windowing environment that could run MS-DOS in one window, mainframe sessions in others. They had continued to develop the same software after the 3270 PC’s release, yielding a proto-operating system with the tentative name of Mermaid. The programmers who created Mermaid would claim in later years that it was far more impressive than either TopView or the first release of Microsoft Windows; “basically, in 1984 or so,” says one, “we had Windows 3.0.” But there was a big difference between Mermaid and even the latter, relatively advanced incarnation of Windows: rather than Mermaid running under MS-DOS, as did Windows, MS-DOS could run under Mermaid. MS-DOS ran, in other words, as just one of many potential tasks within the more advanced operating system, providing the backward compatibility with old software that was considered such a necessary bridge to any post-MS-DOS future. And then, on top of all these advantages, Mermaid already came equipped with a workable GUI. It seemed like the most promising of beginnings.

By contrast, IBM’s third and last alternative for the long-term future initially seemed the most unappetizing by far: to go back to Microsoft, tell them they needed a new operating system to replace MS-DOS, and ask them to develop it with them, alongside their own programmers. There seemed little to recommend such an approach, given how unhappy IBM was already becoming over their dependency on Microsoft — not to mention the way the latter bore direct responsibility for the thriving and increasingly worrisome clone market, thanks to their willingness to license MS-DOS to anyone who asked for it. And yet, incredibly, this was the approach IBM would ultimately choose.

Why on earth would IBM choose such a path? One factor might have been the dismal failure of TopView, their first attempt at making and marketing a piece of microcomputer system software single-handedly, in the spring of 1985. Perhaps this really did unnerve them. Still, one has to suspect that there was more than a crisis of confidence behind IBM’s decision to go from actively distancing themselves from Microsoft to pulling the embrace yet tighter in a matter of months. In that light, it’s been reported that Bill Gates, getting wind of IBM’s possible plans to go it alone, threatened to jerk their existing MS-DOS license if they went ahead with work on a new operating system without him. Certainly IBM’s technical rank and file, who were quite confident in their own ability to create IBM’s operating system of the future and none too happy about Microsoft’s return to the scene, widely accepted this story at the time. “The bitterness was unbelievable,” remembers one. “People were really upset. Gates was raping IBM. It’s incomprehensible.”

Nevertheless, on August 22, 1985, Bill Gates and Bill Lowe, the latter being the president of IBM’s so-called “Entry Systems Division” that was responsible for their personal computers, signed a long-term “Joint Development Agreement” in which they promised to continue to support MS-DOS on IBM’s existing personal computers and to develop a new, better operating system for their future ones. All those who had feared that TopView represented the opening gambit in a bid by IBM to take complete control of the business-microcomputing market could breathe a sigh of relief. “We are committed to the open-architecture concept,” said Lowe, “and recognize the importance of this to our customers.” The new deal between the two companies was in fact far more ironclad and more equal than the one that had been signed before the release of the original IBM PC. “For Microsoft,” wrote the New York Times’s business page that day, “the agreement elevates it from a mere supplier to IBM, with the risk that it could one day be cut off, into more of a partner.” True equal partner with the company that in the eyes of many still was computing in the abstract… Microsoft was moving up in the world.

The public was shown only the first page or two of the new agreement, full of vague reassurances and mission statements. Yet it went on for many more pages after that, getting deep into the weeds of an all-new operating system to be called CP-DOS. (Curiously, the exact meaning of the acronym has never surfaced to my knowledge. “Concurrent Processing” would be my best guess, given the project’s priorities.) CP-DOS was to incorporate all of the sophistication that was missing from MS-DOS, including preemptive multitasking, virtual memory, the ability to address up to 16 MB of physical memory, and a system of device drivers to insulate applications from the hardware, freeing application programmers from the need to manually code up support for every new printer or video card to hit the market. So far, so good.

But this latest stage of an unlikely partnership would prove a very different experience for Microsoft than developing the system software for the original IBM PC had been. Back in 1980 and 1981, IBM, pressured for time, had happily left the software side of things entirely to Microsoft. Now, IBM truly expected to develop CP-DOS as a full partner, not only writing the specifications for the new operating system but handling some of the coding as well. Two radically different corporate cultures clashed from the start. IBM, accustomed to carrying out even the most mundane tasks in bureaucratic triplicate, was appalled at the lone-hacker model of software development that still largely held sway at Microsoft, while the latter’s programmers held their counterparts in contempt, judging them to be a bunch of useless drones who never had an original thought in their lives. “There were good people” at IBM, admits one former Microsoft employee. But then, “there were a lot of not-so-good people also. That’s not Microsoft’s model. Microsoft’s model is only good people. If you’re not good, you don’t stick around.” Neal Friedman, a programmer on the CP-DOS team at Microsoft:

The project was extremely frustrating for people at Microsoft and for people at IBM too. It was a clash of two companies at opposite ends of the spectrum. At IBM, things got done very formally. Nobody did anything on their own. You went high enough to find somebody who could make a decision. You couldn’t change things without getting approval from the joint design-review committee. It took weeks even to fix a tiny little bug, to get approval for anything.

IBM measured their programmers’ productivity in the number of lines of code they could write per day. As Bill Gates and plenty of other people from Microsoft tried to point out, this metric said nothing about the quality of the code they wrote. In fact, it provided an active incentive for programmers to write bloated, inefficient code. Gates compared the project to trying to build the world’s heaviest airplane.

A joke memo circulated inside Microsoft, telling the story of an IBM rowing team that had lost a race. IBM, as was their wont, appointed a “task force” to analyze the failure. The bureaucrats assigned thereto discovered that the IBM team had had eight people steering and one rowing, while the other team had had eight people rowing and one steering. Their recommendation? Why, the eight steerers should simply get the one rower to row harder, of course. Microsoft took to calling IBM’s software-development methodology the “masses-of-asses” approach.

But, as only gradually became apparent to Microsoft’s programmers, Bill Gates had ceded the final authority on what CP-DOS should be and how it should be implemented to those selfsame masses of asses. Scott MacGregor, the Windows project manager during 1984, shares an interesting observation that apparently still applied to the Gates of 1985 and 1986:

Bill sort of had two modes. For all the other [hardware manufacturers], he would be very confident and very self-assured, and feel very comfortable telling them what the right thing to do was. But when he worked with IBM, he was always much more reserved and quiet and humble. It was really funny because this was the only company he would be that way with. In meetings with IBM, this change in Bill was amazing.

In charming or coercing IBM into signing the Joint Development Agreement, Gates had been able to perpetuate the partnership which had served Microsoft so well, but the terms turned out to be perhaps not quite so equal as they first appeared: he had indeed given IBM final authority over the new operating system, as well as agreeing that the end result would belong to Big Blue, not (as with MS-DOS) to Microsoft. As work on CP-DOS began in earnest in early 1986, a series of technical squabbles erupted, all of which Microsoft was bound to lose.

One vexing debate was over the nature of the eventual CP-DOS user interface. Rather than combining the plumbing of the new operating system and the user interface into one inseparable whole, IBM wanted to separate the two. In itself, this was a perfectly defensible choice; successful examples of this approach abound in computing history, from the X Window System of Unix and Linux to the modern Macintosh’s OS X. And of course this was an approach which Microsoft and many others had already taken in building GUI environments to run on top of MS-DOS. So, fair enough. The real disagreements started only when IBM and Microsoft started to discuss exactly what form CP-DOS’s preferred user interface should take. Unsurprisingly, Microsoft wanted to adapt Windows, that project in which they had invested so much of their money and reputation for so little reward, to run on top of CP-DOS instead of MS-DOS. But IBM had other plans.

IBM informed Microsoft that the official CP-DOS user interface at launch time  was to be… wait for it… TopView. The sheer audacity of the demand was staggering. After developing TopView alone and in secret, cutting out their once and future partners, IBM now wanted Microsoft to port it to the new operating system the two companies were developing jointly. (Had they been privy to it, the pundits would doubtless have taken this demand as confirmation of their suspicion that at least some inside IBM had intended TopView to have an existence outside of its MS-DOS host all along.)

“TopView is hopeless,” pleaded Bill Gates. “Just let it die. A modern operating system needs a modern GUI to be taken seriously!” But IBM was having none of it. When they had released TopView, they had told their customers that it was here to stay, a fundamental part of the future of IBM personal computing. They couldn’t just abandon those people who had bought it; that would be contrary to the longstanding IBM ethic of being the safe choice in computing, the company you invested in when you needed stability and continuity above all else. “But almost nobody bought TopView in the first place!” howled Gates. “Why not just give them their money back if it’s that important to you?” IBM remained unmoved. “Do a good job with a CP-DOS TopView,” they said, “and we can talk some more about a CP-DOS Windows with our official blessing.”

Ever helpful, IBM referred Microsoft to a group of six programmers in Berkeley, California, calling themselves Dynamical Systems Research, who had recently come to IBM with a portable re-implementation of TopView which was supposedly one-quarter the size and ran four to ten times faster. (That such efficiency gains over the original version were even possible confirmed every one of Microsoft’s prejudices against IBM’s programmers.) In June of 1986, Steve Ballmer duly bought a plane ticket for Berkeley, and two weeks later Microsoft bought Dynamical for $1.5 million. And then, a month after that, IBM summoned Gates and Ballmer to their offices and told them that they had changed their minds; there would now be no need for a TopView interface in CP-DOS. IBM’s infuriating about-face seemingly meant that Microsoft had just thrown away $1.5 million. (Luckily for them, in the end they would get more than their money’s worth out of the programming expertise they purchased when they bought Dynamical, despite never doing anything with the alternative TopView technology; more on that in a future article.)

The one good aspect of this infuriating news was that IBM had at last decided that they and Microsoft should write a proper GUI for CP-DOS. Even this news wasn’t, however, as good as Microsoft could have wished: said GUI wasn’t to be Windows, but rather a new environment known as the Presentation Manager, which was in turn to be a guinea pig for a new bureaucratic monstrosity known as the Systems Application Architecture. SAA had been born of the way that IBM had diversified since the time when the big System/360 mainframes had been by far the most important part of their business. They still had those hulking monsters, but they had their personal computers now as well, along with plenty of machines in between the two extremes, such as the popular System/38 range of minicomputers. All of these machines had radically different operating systems and operating paradigms, such that one would never guess that they all came from the same company. This, IBM had decided, was a real problem in terms of technological efficiency and marketing alike, one which only SAA could remedy. They described the latter as “a set of software interfaces, conventions, and protocols — a framework for productively designing and developing applications with cross-system dependency.” Implemented across IBM’s whole range of computers, it would let programmers transition easily from one platform to another thanks to a consistent API, and the software produced with it would all have a consistent, distinctively “IBM” look and feel, conforming to a set-in-stone group of interface guidelines called Common User Access.

SAA and CUA might seem a visionary scheme from the vantage point of our own era of widespread interoperability among computing platforms. In 1986, however, the devil was very much in the details. The machines which SAA and CUA covered were so radically different in terms of technology and user expectations that a one-size-fits-all model couldn’t possibly be made to seem anything but hopelessly compromised on any single one of them. CUA in particular was a pedant’s wet dream, full of stuff like a requirement that every single dialog box had to have buttons which said “OK = Enter” and “ESC = Cancel,” instead of just “OK” and “Cancel.” “Surely we can expect people to figure that out without beating them over the head with it every single time!” begged Microsoft.

For a time, such pleas fell on deaf ears. Then, as more and more elements descended from IBM’s big computers proved clearly, obviously confusing in the personal-computing paradigm, Microsoft got permission to replace them with elements drawn from their own Windows. The thing just kept on getting messier and messier, a hopeless mishmash of two design philosophies. “In general, Windows and Presentation Manager are very similar,” noted one programmer. “They only differ in every single application detail.” The combination of superficial similarity with granular dissimilarity could only prove infuriating to users who went in with the reasonable expectation that one Microsoft-branded GUI ought to work pretty much the same as another.

Yet the bureaucratic boondoggle that was SAA and CUA wasn’t even the biggest bone of contention between IBM and Microsoft. That rather took the form of one of the most basic issues of all: what CPU the new operating system should target. Everyone agreed that the old 8088 should be left in the past along with the 640 K barrier it had spawned, but from there opinions diverged. IBM wanted to target the 80286, thus finally providing all those PC/ATs they had already sold with an operating system worthy of their hardware. Microsoft, on the other hand, wanted to skip the 80286 and target Intel’s very latest and greatest chip, the 80386.

The real source of the dispute was that same old wellspring of pain for anyone hoping to improve upon MS-DOS: the need to make sure that the new-and-improved operating system could run old MS-DOS software. Doing so, Bill Gates pointed out, would be far more complicated from the programmer’s perspective and far less satisfactory from the user’s perspective with the 80286 than it would with the 80386. To understand why, we need to look briefly at the historical and technical circumstances behind each of the chips.

It generally takes a new microprocessor some time to go from being available for purchase on its own to being integrated into a commercial computer. Thus the 80286, which reached the mass market with the PC/AT in August of 1984, had first appeared in Intel’s product catalog in February of 1982. It had largely been designed, in other words, before the computing ecosystem spawned by the IBM PC existed. Its designers had understood that compatibility with the 8088 might be a good thing to have to go along with the capabilities of the chip’s new protected mode, but had seen the two things as an either/or proposition. You would either boot the machine in real mode to run a legacy 8088-friendly operating system and its associated software, or you’d boot it in protected mode to run a more advanced operating system. Switching between the two modes required resetting the chip — a rather slow process that Intel had anticipated happening only when the whole machine in which it lived was rebooted. The usage scenario which Intel had most obviously never envisioned was the very one which IBM and Microsoft were now proposing for CP-DOS: an operating system that constantly switched on the fly between protected mode, which would be used for running the operating system itself and native applications written for it, and real mode, which would be used for running MS-DOS applications.

But the 80386, which entered Intel’s product catalog in September of 1985, was a very different beast, having had the chance to learn from the many Intel-based personal computers which had been sold by that time. Indeed, Intel had toured the industry before finalizing their plans for their latest chip, asking many of its movers and shakers — a group which prominently included Microsoft — what they wanted and needed from a third-generation CPU. The end result offered a 32-bit architecture to replace the 16 bits of the 80286, with the capacity to address up to 4 GB of memory in protected mode to replace the 16 MB address space of the older chip. But hidden underneath the obvious leap in performance were some more subtle features that were if anything even more welcome to programmers in Microsoft’s boat. For one thing, the new chip could be switched between real mode and protected mode quickly and easily, with no need for a reset. And for another, Intel added a third mode, a sort of compromise position in between real and protected mode that was perfect for addressing exactly the problems of MS-DOS compatibility with which CP-DOS was doomed to struggle. In the new “virtual” mode, the 80386 could fool software into believing it was running on an old 8088-based machine, including its own virtual 1 MB memory map, which the 80386 automatically translated into the real machine’s far more expansive memory map.

The power of the 80386 in comparison to the 8088 was such that a single physical 80386-based machine should be able to run a dozen or more virtual MS-DOS machines in parallel, should the need arise — all inside a more modern, sophisticated operating system like the planned CP-DOS. The 80386’s virtual mode really was perfect for Microsoft’s current needs — as it ought to have been, given that Microsoft themselves were largely responsible for its existence. It offered them a chance that doesn’t come along very often in software engineering: the chance to build a modern new operating system while maintaining seamless compatibility with the old one.
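To make the address-translation trick concrete, here is a minimal sketch in Python of the arithmetic involved. This is illustrative only, not actual systems code, and the per-task memory layout is invented for the example:

    # Real-mode addressing as on the 8088: a 16-bit segment and a 16-bit
    # offset combine into a 20-bit address, hence the 1 MB (2**20) limit.
    def real_mode_linear(segment: int, offset: int) -> int:
        return (segment * 16 + offset) & 0xFFFFF

    # In the 80386's "virtual" mode, the paging hardware could transparently
    # relocate each virtual machine's 1 MB map to its own region of physical
    # memory; task_base is an invented stand-in for that remapping.
    def v86_physical(segment: int, offset: int, task_base: int) -> int:
        return task_base + real_mode_linear(segment, offset)

    assert real_mode_linear(0xFFFF, 0x000F) == 0xFFFFF   # top of the 1 MB map
    # Two MS-DOS sessions, each believing it owns addresses 0 through 1 MB:
    print(hex(v86_physical(0x1234, 0x0010, 1 * 2**20)))  # first task lives at 1 MB
    print(hex(v86_physical(0x1234, 0x0010, 2 * 2**20)))  # second task lives at 2 MB

For scale: the 80286’s 24 address bits allowed 2**24 bytes = 16 MB in protected mode, while the 80386’s 32 bits allowed 2**32 bytes = 4 GB.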

Some reports have it that Bill Gates, already aware that the 80386 was coming, had tried to convince IBM not to build the 80286-based PC/AT at all back in 1984, telling them they should just stay with the status quo until the 80386 was ready. But even in 1986, the 80386 was, according to IBM, still too far off in the future as a real force in personal computing to become the minimum requirement for CP-DOS. They anticipated taking a leisurely two-and-a-half years or so, as they had with the 80286, to package the 80386 into a new model. Said model thus likely wouldn’t appear until 1988, and its sales might not reach critical mass until a year or two after that. The 80386 was, IBM said, simply a bridge too far for an operating system they wanted to release by 1987. Besides, in light of the IBM Safeness Doctrine, they couldn’t just abandon those people who had already spent a lot of money on PC/ATs under the assumption that it was the IBM personal computer of the future.

“Screw the people with ATs,” was Gates’s undiplomatic response. “Let’s just make it for the 386, and they can upgrade.” He gnashed his teeth and raged, but IBM was implacable. Instead of being able to run multiple MS-DOS applications in parallel on CP-DOS, almost as if they had been designed for it from the start, Microsoft would be forced to fall back on using their new operating system as little more than a launcher for the old whenever the user wished to run an MS-DOS application. And it would, needless to say, be possible to run only one such application at a time. None of that really mattered, said IBM; once people saw how much better CP-DOS was, developers would port their MS-DOS applications over to it and the whole problem of compatibility would blow away like so much smoke. Bill Gates was far less sanguine that Microsoft and IBM could so easily kill their cockroach of an operating system. But in this as in all things, IBM’s decision was ultimately the law.

Here we see the CP-DOS (later OS/2 1.x) physical memory map. A single MS-DOS application can be loaded into the space below 1 MB — more specifically, into the box labeled “3.x” above. (MS-DOS 3 was the current version at the time that IBM and Microsoft were working on CP-DOS.) Because MS-DOS applications must run in the processor’s real mode, accessing physical rather than virtual memory addresses, only one application can be loaded into this space — and only this space! — at a time. Native CP-DOS applications live in the so-called “high memory” above the 1 MB boundary — more specifically, in the space labeled “protected-mode” in the diagram above. As many of these as the user wishes can be loaded at one time up there. Had IBM agreed to build CP-DOS for the 80386 rather than the 80286, it would have been possible to use that processor’s “virtual” mode to trick MS-DOS applications into believing they were running in real mode underneath the 640 K boundary, regardless of where they actually lived in memory. This would have allowed the user to run multiple MS-DOS applications alongside multiple native CP-DOS applications. In addition, an 80386 CP-DOS would have been able to address up to 4 GB of memory rather than being limited to 16 MB.

Microsoft’s frustration only grew when IBM’s stately timetable for the 80386 was jumped by the increasingly self-confident clone makers. In September of 1986, Compaq, the most successful and respected clone maker of all, shipped the DeskPro 386, the very first MS-DOS-compatible machine to use the chip. Before the end of the year, several other clone makers had announced 80386-based models of their own in response. It was a watershed moment in the slow transformation of business-oriented microcomputing from an ecosystem where IBM blazed the trails and a group of clone makers copied their innovations to one where many equal players all competed and innovated within an established standard for software and hardware which existed independently of all of them. A swaggering Compaq challenged IBM to match the DeskPro 386 within six months “or be supplanted as the market’s standard-setter.” Michael Swavely, Compaq’s marketing guru, was already re-framing the conversation in ways whose full import would only gradually become clear over the years to come:

We believe that an industry standard that has been established for software for the business marketplace is clearly in place. What we’ve done with the DeskPro 386 is innovate within that existing standard, as opposed to trying to go outside the standard and do something different. IBM may or may not enter the [80386] marketplace at some point in the future. The market will judge what IBM brings in the same way that it judges any other manufacturer’s new products. They have to live within the market’s realities. And the reality is that American business has made an enormous investment in an industry standard.

More than ever before, IBM was feeling real pressure from the clone makers. Their response would give the lie to all of their earlier talk of an open architecture and their commitment thereto.

IBM had already been planning a whole new range of machines for 1987, to be called the PS/2 line. Those plans had originally not included an 80386-based machine, but one was hastily added to the lineup now. Yet the appearance of that machine was only one of the ways in which the PS/2 line showed plainly that clone makers like Compaq were making IBM nervous with their talk of a “standard” that now had an existence independent from the company that had spawned it. IBM planned to introduce with the PS/2 line a new type of system bus for hardware add-ons, known as the Micro Channel Architecture. Whatever its technical merits, which could and soon would be hotly debated, MCA was clearly designed to cut the clone makers off at the knees. Breaking with precedent, IBM wrapped MCA up tight inside a legal labyrinth of patents, meaning that anyone wishing to make a PS/2 clone or even just an MCA-compatible expansion card would have to negotiate a license and pay for the privilege. If IBM got their way, the curiously idealistic open architecture of the original IBM PC would soon be no more.

In a testament to how guarded the relationship between the two supposedly fast partners really was, IBM didn’t even tell Microsoft about their plans for the PS/2 line until just a few months before the public announcement. Joint Development Agreement or no, the former now suspected the latter’s loyalty more strongly than ever — and for, it must be admitted, pretty good reason: a smiling Bill Gates had recently appeared alongside two executives from Compaq and their DeskPro 386 on the front page of InfoWorld. Clearly he was still playing both sides of the fence.

Now, Bill Gates got the news that CP-DOS was to be renamed OS/2, and would join PS/2 as the software half of a business-microcomputing future that would once again revolve entirely around IBM. For some time, he wasn’t even able to get a clear answer to the question of whether IBM intended to allow OS/2 to run at all on non-PS/2 hardware — whether they intended to abandon their old PC/AT customers after all, writing them off as collateral damage in their war against the clonesters and making MCA along with an 80286 a minimum requirement of OS/2.

On April 2, 1987, IBM officially announced the PS/2 hardware line and the OS/2 operating system, sending shock waves through their industry. Would this day mark the beginning of the end of the clone makers?

Any among that scrappy bunch who happened to be observing closely might have been reassured by some clear evidence that this was a far more jittery version of IBM than anyone had ever seen before, as exemplified by the splashy but rather chaotic rollout schedule for the new world order. Three PS/2 machines were to ship immediately: one of them based around an Intel 8086 chip very similar to the 8088 in the original IBM PC, the other two based around the 80286. But the 80386-based machine they were scrambling to get together in response to the Compaq DeskPro 386 — not that IBM phrased things in those terms! — wouldn’t come out until the summer. Meanwhile OS/2, which was still far from complete, likely wouldn’t appear until 1988. It was a far cry from the unified, confident rollout of the System/360 mainframe line more than two decades earlier, the seismic computing event IBM now seemed to be consciously trying to duplicate with their PS/2 line. As it was, the 80286- and 80386-based PS/2 machines would be left in the same boat as the older PC/AT for months to come, hobbled by that monument to inadequacy that was MS-DOS. And even once OS/2 did come out, the 80386-based PS/2 Model 80 would still remain somewhat crippled for the foreseeable future by IBM’s insistence that OS/2 run on the 80286 as well.

The first copies of the newly rechristened OS/2 to leave IBM and Microsoft’s offices did so on May 29, 1987, when selected developers who had paid $3000 for the privilege saw a three-foot long, thirty-pound box labelled “OS/2 Software Development Kit,” containing nine disks and an astonishing 3100 pages worth of documentation, thump onto their porch two months before Microsoft had told them it would arrive. As such, it was the first Microsoft product ever to ship early; less positively, it was also the first time they had ever asked anyone to pay to be beta testers. Microsoft, it seemed, was feeling their oats as IBM’s equal partners.

The first retail release of OS/2 also beat its announced date, shipping in December of 1987 instead of the first quarter of 1988. Thankfully, IBM listened to Microsoft’s advice enough to quell the very worst of their instincts: they allowed OS/2 to run on any 80286-or-better machine, not restricting it to the PS/2 line. Yet, at least from the ordinary user’s perspective, OS/2 1.0 was a weirdly underwhelming experience after all the hype of the previous spring. The Presentation Manager, OS/2’s planned GUI, had fallen so far behind amidst all the bureaucratic squabbling that IBM had finally elected to ship the first version of their new operating system without it; this was the main reason they had been able to release the remaining pieces earlier than planned. In the absence of the Presentation Manager, what the user got was the plumbing of a sophisticated modern operating system coupled to a command-line interface that made it seem all but identical to hoary old MS-DOS. I’ve already described in earlier articles how a GUI fits naturally with advanced features like multitasking and inter-application data sharing. These and the many other non-surface improvements which MS-DOS so sorely needed were there in OS/2, hidden away, but in the absence of a GUI only the programmer or the true power user could make much use of them. The rest of the world was left to ask why they had just paid $200 for a slightly less compatible, slightly slower version of the MS-DOS that had come free with their computers. IBM themselves didn’t quite seem to know why they were releasing OS/2 now, in this state. “No one will really use OS/2 1.0,” said Bill Lowe. “I view it as a tool for large-account customers or software developers who want to begin writing OS/2 applications.” With a sales pitch like that, who could resist? Just about everybody, as it happened.

OS/2 1.1, the first version to include the Presentation Manager — i.e., the first real version of the operating system in the eyes of most people — didn’t ship until the rather astonishingly late date of October 31, 1988. After such a long wait, the press coverage was lukewarm and anticlimactic. The GUI worked well enough, wrote the reviewers, but the whole package was certainly memory-hungry; the minimum requirement for running OS/2 was 2.5 MB, the recommended amount 5 MB or more, both huge numbers for an everyday desktop computer circa 1988. Meanwhile a lack of drivers for even many of the most common printers and other peripherals rendered those devices useless under the new operating system. And OS/2 application software as well was largely nonexistent. The chicken-or-the-egg conundrum had struck again. With so little software or driver support, no one was in a hurry to upgrade to OS/2, and with so little user uptake, developers weren’t in a hurry to deliver software for it. “The broad market will turn its back on OS/2,” predicted Jeffrey Tarter, expressing the emerging conventional wisdom in the widely read insider newsletter Softletter. Philippe Kahn of Borland, an executive who was never at a loss for words, started a meme when he dubbed the new operating system “BS/2.” In the last two months of 1988, 4 percent of 80286 owners and 16 percent of 80386 owners took the OS/2 plunge. Yet even those middling figures gave a rosier view of OS/2’s prospects than was perhaps warranted. By 1990, OS/2 would still account for just 1 percent of the total installed base of personal-computer operating systems in the United States, while the unkillable MS-DOS still owned a 66-percent share.

A Quick Tour of the OS/2 1.1 Presentation Manager


Presentation Manager boots into its version of a start menu, listing its installed programs. This fact of course means that, unlike Windows 1, Presentation Manager does include the concept of installing applications rather than just working with them at the file level. That said, it still remains much more text-oriented than modern versions of Windows or contemporary versions of MacOS. Applications are presented in the menu as a textual list, unaccompanied by icons. Only minimized applications and certain always-running utilities appear as icons on the “desktop,” which is still not utilized as the general-purpose workspace we’re familiar with today.

Still, in many ways Presentation Manager 1.1 feels vastly more intuitive today than Windows 1. The “tiled windows” paradigm is blessedly gone. Windows can be dragged freely around the screen and can overlay one another, and niceties like the sizing widgets all work as we expect them to.

Applications can even open sub-windows that live within other windows. You can see one of these inside the file manager above.

One area where Presentation Manager is less like the Macintosh than Windows 1, but more like current versions of Microsoft Windows, is in its handling of menus. The one-click menu approach is used here, not the click-and-hold approach of the Mac.

Here we’ve opened a DOS box for running vanilla MS-DOS software. Only one such application can be run at a time, thanks to IBM’s insistence that OS/2 should run on the 80286 processor.

Presentation Manager includes a control panel for managing preferences that’s far slicker than the one included in Windows 1. Yet it shipped with a dearth of the useful little applets Microsoft included with Windows right from the beginning. There isn’t so much as a decent text editor here. Given that IBM would struggle mightily to get third-party developers to write applications for OS/2, such stinginess was… not good.

Amidst all of the hoopla over the introduction of the PS/2 and OS/2 back in the spring of 1987, Byte‘s editor-in-chief Philip Lemmons had sounded a cautionary note to IBM that reads as prescient today:

With the introduction of the PS/2 machines, IBM has begun to compete in the personal-computer arena on the basis of technology. This development is welcome because the previous limitations of the de-facto IBM standard were painfully obvious, especially in systems software. The new PS/2 “standard” offers numerous improvements: the Micro Channel is a better bus than the PC and AT buses, and it provides a full standard for 32-bit buses. The VGA graphics standard improves on EGA. The IBM monitors for the PS/2 series take a new approach that will ultimately deliver superior performance at lower prices. IBM is using 3.5-inch floppy disks that offer more convenience, capacity, and reliability than 5.25-inch floppy disks. And OS/2, the new system software jointly developed by Microsoft and IBM, will offer advances such as true multitasking and a graphic user interface.

Yet a cloud hangs over all this outstanding new technology. Like other companies that have invested in the development of new technology, IBM is asserting proprietary rights in its work. When most companies do this in most product areas, we expect and accept it. When one company has a special role of setting the de-facto standard, however, the aggressive assertion of proprietary rights prevents the widespread adoption of the new standard and delays the broad distribution of new technology.

The personal-computer industry has waited for years for IBM to advance the standard, and now, depending on IBM’s moves, may be unable to participate in that advancement. If so, the rest of the industry and the broad population of computer users still need another standard for which to build and buy products — a standard at least as good as the one embodied in the PS/2 series.

Lemmons’s cautions were wise ones; his only mistake was in not stating his concerns even more forcefully. For the verdict of history is clear: PS/2 and OS/2 are the twin disasters which mark the end of the era of IBM’s total domination of business-oriented microcomputing. The PS/2 line brought with it a whole range of new hardware standards, some of which, like new mouse and keyboard ports and a new graphics standard known as VGA, would remain with us for decades to come. But these would mark the very last technical legacies of IBM’s role as the prime mover in mainstream microcomputing. Other parts of the PS/2 line, most notably the much-feared proprietary MCA bus, did more to point out the limits of IBM’s power than the opposite. Instead of dutifully going out of business or queuing up to buy licenses, third-party hardware makers simply ignored MCA. They would eventually form committees to negotiate new, open bus architectures of their own — just as Philip Lemmons predicted in the extract above.

OS/2 as well only served to separate IBM’s fate from that of the computing standard they had birthed. It arrived late and bloated, and went largely ignored by users who stuck with MS-DOS — an operating system that was now coming to be seen not as IBM’s system-software standard but as Microsoft’s. IBM’s bold bid to cement their grip on the computer industry only caused it to slip through their fingers.

All of which placed Microsoft in the decidedly strange position of becoming the prime beneficiary of the downfall of an operating system which they had done well over half the work of creating. Given the way that Bill Gates’s reputation as the computer industry’s foremost Machiavelli precedes him, some have claimed that he planned it all this way from the beginning. In their otherwise sober-minded book Computer Wars: The Fall of IBM and the Future of Global Technology, Charles H. Ferguson and Charles R. Morris indulge in some elaborate conspiracy theorizing that’s all too typical of the whole Gates-as-Evil-Genius genre. Gates made certain that his programmers wrote OS/2 in 80286 assembly language rather than a higher-level language, the authors claim, to make sure that IBM couldn’t easily adapt it to take advantage of the more advanced capabilities of chips like the 80386 after his long-planned split with them finally occurred. In the meantime, Microsoft could use the OS/2 project to experiment with operating-system design on IBM’s dime, paving the way for their own eventual MS-DOS replacement.

If Gates expected ultimately to break with IBM, he has every interest in ensuring OS/2’s failure. In that light, tying the project tightly to 286 assembler was a masterstroke. Microsoft would have acquired three years’ worth of experience writing an advanced, very sophisticated operating system at IBM’s elbow, applying all the latest development tools. After the divorce, IBM would still own OS/2. But since it was written in 286 assembler, it would be almost utterly useless.

In reality, the sheer amount of effort Microsoft put into making OS/2 work over a period of several years — far more effort than they put into their own Windows over much of this period — argues against such conspiracy-mongering. Bill Gates was unquestionably trying to keep one foot in IBM’s camp and one foot in the clone makers’, much to the frustration of both, who equally craved his undivided loyalty. But he had no crystal ball, and he wasn’t playing three-dimensional chess. He was just responding, rather masterfully, to events on the ground as they happened, and always — always — hedging all of his bets.

So, even as OS/2 was getting all the press, Windows remained a going concern, Gates’s foremost hedge against the possibility that the vaunted new operating system might indeed prove a failure and MS-DOS might remain the standard it had always been. “Microsoft has a religious approach to the graphical user interface,” said the GUI-skeptic Pete Peterson, vice president of WordPerfect Corporation, around this time. “If Microsoft could choose between improved earnings and growth and bringing the graphical user interface to the world, they’d choose the graphical user interface.” In his own way, Peterson was misreading Gates as badly here as the more overheated conspiracy theorists have tended to do. Gates was never one to sacrifice profits to any ideal. It was just that he saw the GUI itself — somebody’s GUI — as such an inevitability. And thus he was determined to ensure that the inevitability had a Microsoft logo on the box when it became an actuality. If the breakthrough product wasn’t to be the OS/2 Presentation Manager, it would just have to be Microsoft Windows.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris; InfoWorld of October 7 1985, July 7 1986, August 12 1991, and October 21 1991; PC Magazine of November 12 1985, April 12 1988, December 27 1988, and September 12 1989; New York Times of August 22 1985; Byte of June 1987, September 1987, October 1987, April 1988, and the special issue of Fall 1987; the episode of the Computer Chronicles television program called “Intel 386 — The Fast Lane.” Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)


  1. Admittedly, this was already beginning to change as IBM was having this debate: the X Window project was born at MIT in 1984. 

July 12, 2018

Inkle

Announcing... ink Jam!

July 12, 2018 09:41 PM

Interested in ink? Been meaning to have a go at learning it, but never had the motivation? Or are you a skilled inkist, looking to show off your skills?


Either way, if you want a chance to take ink for a spin, we've got a great opportunity coming up. In collaboration with The Pixel Hunt, the studio behind the multi-award-winning Bury Me, My Love, we're launching ink Jam - a three-day game jam for games made in ink.

About ink

ink is a scripting language for interactive fiction that's designed to be used by humans. Using a simple but powerful markup-based approach, it's easy to create a branching flow that responds and shifts based on everything the player chooses. It's quick to test, redraft, and restructure. But ink is also powerful, with significant features that allow for complex state-tracking and world-modelling. It comes with an IDE, JavaScript output, and Unity integration.


It's totally free, and it's being used by studios all over the world to create all kinds of interactive experiences, from a news-game about Uber drivers created by the British newspaper the Financial Times, an E3 favourite (https://www.neocabgame.com/), and an IGF finalist (http://wherethewatertasteslikewine.com/), through to a sailing qualification course, Air New Zealand's chatbot, a game entirely written in emoji, a procedural ASCII dungeon crawler, and a celebrated globe-trotting adventure, to name but a few.

How to get involved

The jam is being hosted on itch.io over the weekend of August 31st to September 3rd. Entries can be submitted via the itch jam page, and we'll be judging the results for creativity and technical wizardry.

Once the jam starts we'll be announcing a theme to help get your ideas going. Until then, if you need any inspiration, check out one of the many games written and released using ink, or visit our Patreon tips page for some of the stranger and more powerful things ink can do!


Choice of Games

Blood Money — Take over your crime family with ghost power!

by Rachel E. Towers at July 12, 2018 04:41 PM

We’re proud to announce that Blood Money, the latest in our popular “Choice of Games” line of multiple-choice interactive-fiction games, is now available on Steam and Android, and on iOS in the Choice of Games Omnibus app. It’s 33% off until July 19th!

By the power of your blood, you and your ghosts will take over your crime family!

Blood Money is a 290,000-word interactive novel by Hannah Powell-Smith. It’s entirely text-based, without graphics or sound effects, and fueled by the vast, unstoppable power of your imagination.

When your cousin murders the city’s most notorious crime boss–your mother–a power struggle erupts across the criminal underworld. As your sisters Octavia and Fuschia vie for control, you alone in the family possess the blood magician’s power to summon and command ghosts. They hunger for your blood; if it’s blood they want, then blood they’ll have.

Will you take over the family business? Remain loyal, go it alone, or defect to a rival gang?

• Play as male, female, or non-binary; gay, straight, bi, or ace.
• Embrace your unearthly gifts and build connections with the dead, or banish ghosts to the underworld to protect the living.
• Look for love, or manipulate your friends and allies; betray those who trust you, or maintain family loyalty no matter the cost.
• Fight a gang war for your family, defect to your rivals, or reject a life of crime.
• Negotiate volatile family relations: resolve squabbles, fall in line as a loyal lieutenant, or sharpen your knife for backstabbing.
• Influence citywide politics: exploit the Mayor’s office for your own ends, or use your connections for a greater cause.

What will you sacrifice for freedom, and who will you sacrifice for power?

We hope you enjoy playing Blood Money. We encourage you to tell your friends about it, and recommend the game on Facebook, Twitter, Tumblr, and other sites. Don’t forget: our initial download rate determines our ranking on the App Store. The more times you download in the first week, the better our games will rank.

July 11, 2018

"Adventuron"

The battle for vertical space on Android

by Chris Ainsley ([email protected]) at July 11, 2018 12:15 AM

I thought I'd do a small blog post on the problems I've been having with Android for about a year now. This post relates to just trying to get a serviceable keyboard-based UI going on Android, and has nothing to do with the gesture / touch optimized UI that is scheduled for release later in the year.



The first problem is that Android's soft navigation buttons are impossible to remove (without hacking your phone) when the soft keyboard appears. This eats up valuable screen real estate.

The second problem (in Android OS) is that when the keyboard is visible (together with the soft nav buttons), so is the status bar at the top of the phone. This has the unfortunate effect of overwriting some portion of the location header text. It also breaks the immersion of a pixel-typography experience (if that is your game). Pixel fonts look great, but only when not mixed and matched with modern antialiased typography. I guess Adventuron is an edge case!

The third problem is that the GBoard software keyboard (the standard Google software keyboard that ships with the majority of phones) is antisocial when it comes to vertical space. I can set an INPUT widget to require noautocomplete, and the old Google Keyboard would honour this request. The new Google keyboard still takes the vertical space but doesn't show autocompletions. It looks hideous. This can be rectified by installing a different keyboard, such as Simple Keyboard, but it's too much to ask a casual player to change their system keyboard. I'm very disappointed with Google's decision not to give app developers the option to use that vertical space for themselves. Maybe it's a security feature - to stop web apps from pretending to be the autocomplete bar? Who knows? ... but it makes using Adventuron on Android an awful experience.

The fourth problem, and one that I have been pushing to have fixed in Chrome for over a year now, is that full-screen immersive mode is simply broken: the bottom of the view area is obscured by the visible keyboard, and you can't scroll down to see the hidden area. It's just plain broken... but there is some hope that this *may* be fixed in Chrome 69. Let's see.

... for sure, problem #1 and problem #2 are not getting solved any time soon. In my view, these are poor design decisions at an OS level, and the only workable solution is a bespoke software keyboard rendered as a bottom-clipped widget... So that's ultimately what I'll have to do.

Assuming that problem #4 is fixed, and assuming that the player installs a keyboard that respects the NOAUTOCOMPLETE attribute, then this is the best that will be possible on Android (until I build my own integrated keyboard):



In terms of iOS, well, no problems. Well done, Apple.

Never say never: it's possible that GBoard gets fixed, that the Chrome fullscreen issue gets fixed, that new versions of Android do away with the status bar showing when the keyboard is up, and that Android allows the software nav buttons to disappear when the keyboard is up. Or maybe Google just builds in per-app keyboard preferences that let users select a more optimal keyboard only for text adventures. But it's a lot of ifs. Right now, and for at least a year into the horizon, Adventuron will be sub-optimal on Android, at least without a hack of the highest order (building my own software keyboard).

The good news is that phone screens are getting taller, and it actually does feel quite good to type on a mobile keyboard. In time, let's hope, all of this will be resolved.

July 10, 2018

Emily Short

Mailbag: World Simulation for Story Generation

by Emily Short at July 10, 2018 04:40 PM

Hi Emily,

not sure if this is too specific a question, but I wonder if you can help me out:

Last NaNoGenMo I started out on doing a more sophisticated version of Meehan’s Talespin. I ended up doing lots of planning, but not too much implementation, so I’m going to continue my efforts this year. The biggest part of it all seems to be a world simulator. This would be where all the actions the characters are planning to do get executed. In a way this seems very similar to what is part of every IF/adventure game (locations, objects, etc), and I have programmed some of those in my micro computer days on a ZX Spectrum… But I feel this needs to be more sophisticated (and on a larger scale), and general, as there will be a multitude of possible actions that can modify the world state. And it might also have to be ‘fractal’ (for want of a better word), as there is landscape in the wild, cities (which are in the landscape, but consist of many locations where there would otherwise just be a single location of forest), and rooms in houses in cities, etc.

Are you aware of any systems in existence that I could use for that? I am kind of hoping it would be similar to a physics package for arcade games, a ready-made package that I can slot my content data and planner actions into. Or do I have to write my own?

Any pointers much appreciated!

I’ll try to answer this in its own terms below, but first I feel like those answers should come with some warnings.

This is presumably obvious, but there is no such thing as a generic world simulator. Any simulation is making decisions about what to model and what to ignore, what level of abstraction to use, what kinds of state are interesting to preserve, and so on. It’s misleading to think of “a world simulator” as “like” a physics engine. A physics engine is a type of world simulator, one that focuses on a comparatively well-defined kind of state: position, velocity, acceleration, elasticity, and so on. (I don’t mean it’s easy to write a good one! Just that the domain of simulation is reasonably defined.)

Text adventure world models tend to focus on rooms, furnishings, and items of the proper size to be carried by a human being, but that’s also very much a subset of what you might theoretically want to model. Items that are abstract (“love”, “communism”), intangible (“the smell of grass”, “a traumatic memory”), larger than a room (“Antarctica”), or smaller than a human being could pick up (“ant”, “quark”) are less likely to be simulated — though of course some of us have worked on extending the world model to include support for conversation topics, knowledge, and relationships.

Moreover, the choices you make about what to simulate will very heavily affect what kind of story you get out the other end. Finally, the way in which your simulation is described will affect how easily it can be plugged into a planner system.

I’d also warn against thinking of the content data as a minor aspect of the system. It’s common for research on story generation to focus on the process: is this grammar-based, is it a planner, is it partially ordered or not, are we forward-chaining or backward-chaining (or a little of each), are we planning the events of the story separately from the discourse layer, how are we measuring coherence and novelty (if those are even our concerns at all). Those are all interesting questions to ask, but the features of the content data also make a very significant difference to output quality, and a corpus implicitly encodes a lot of things about the possible story space that may not be inherently required by the generative method.

In every project I’ve worked on where there was a target output quality (as opposed to toy or experimental projects), most of my development time on a first release has gone into working and reworking the content data, rather than refining the process of generation. Once the qualities of a good data set are well understood, it’s possible to make additional data sets that conform to those expectations in significantly less time, and sometimes to support the process with tooling. But I tend to regard experiments in story generation as fundamentally unfinished unless the experimenter also presents reasonably polished artifacts of their generative process. If they’re just describing a process and leaving high-quality artifact creation as an exercise for the future, then most of the work is (in my experience) still ahead of them.

So. That is the end of my speech. Now that we’re done with warnings:

An IF world model might provide some of what you need, if you’re focused mostly on actions that involve moving objects around and changing object state. After the Spectrum days, those got quite a bit more sophisticated. Both Inform 7 and TADS 3 would be able to handle landscapes that included highly-detailed urban spaces and less-detailed forest spaces, for instance.

Generally speaking, TADS 3 offers more pre-made world model features in its library — so for instance if you want your plans to involve detailed actions involving candles or flashlights or dark rooms, TADS 3 is more likely to have pre-existing implementations, and it can perform tasks like figuring out how sound or odor might propagate through a world space. On the other hand, Inform is designed to be pretty flexible about adding new concepts and relationships to its model world. Both systems have extension libraries by third parties that you could plug in as well.
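Whichever system you choose, the core of such a model is smaller than it sounds. Here is a minimal sketch in Python; all the names are invented for illustration, and this is not Inform or TADS code, just the general shape of the thing:

    # A bare-bones text-adventure world model: rooms, portable things,
    # and one primitive state-changing action.
    class Room:
        def __init__(self, name):
            self.name = name
            self.exits = {}       # direction -> Room
            self.contents = []    # Things currently here

    class Thing:
        def __init__(self, name, portable=True):
            self.name = name
            self.portable = portable
            self.location = None  # the Room currently holding it

    def move_thing(thing, destination):
        # Relocating an object is the primitive on which most IF world
        # models are built; planner actions bottom out in calls like this.
        if thing.location is not None:
            thing.location.contents.remove(thing)
        destination.contents.append(thing)
        thing.location = destination

    forest = Room("Forest clearing")
    hut = Room("Woodcutter's hut")
    forest.exits["north"] = hut
    hut.exits["south"] = forest

    axe = Thing("axe")
    move_thing(axe, hut)
    print([t.name for t in hut.contents])  # ['axe']

The “fractal” quality described in the letter then becomes largely a question of how finely you subdivide regions into rooms, rather than something the model itself needs to know about.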

I don’t know whether any of the implementations of this are up to date with the current versions of TADS and Inform, but Nate Cull wrote several implementations of Reactive Agent Planner, a library designed to allow NPCs to make plans about how to navigate text adventure environments — and re-plan in response to environmental changes caused by the player or other NPCs. I realize that you want to build your own planner, but this might be interesting to have a look at anyway.
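To give a flavor of what that kind of planning involves, here is a toy goal-directed planner in Python. It is far simpler than Cull’s library, and all the facts and action names are invented: the world state is just a set of facts, and each action lists preconditions and effects.

    # A toy STRIPS-style planner: depth-limited forward search over a
    # world state represented as a set of facts.
    ACTIONS = {
        "go to kitchen": {"pre": set(),          "add": {"in kitchen"},   "del": {"in hall"}},
        "take kettle":   {"pre": {"in kitchen"}, "add": {"has kettle"},   "del": set()},
        "boil water":    {"pre": {"has kettle"}, "add": {"water boiled"}, "del": set()},
    }

    def plan(state, goal, depth=5):
        # Returns a list of action names reaching the goal, or None.
        if goal <= state:
            return []
        if depth == 0:
            return None
        for name, a in ACTIONS.items():
            if a["pre"] <= state:
                new_state = (state - a["del"]) | a["add"]
                if new_state == state:
                    continue  # skip actions that change nothing
                rest = plan(new_state, goal, depth - 1)
                if rest is not None:
                    return [name] + rest
        return None

    print(plan({"in hall"}, {"water boiled"}))
    # -> ['go to kitchen', 'take kettle', 'boil water']

Re-planning in response to environmental change, the “reactive” part, then amounts to discarding the remainder of the current plan and calling the planner again from the new world state.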

Also in the IF space, Curveship has a pretty basic world model to start with, relative to Inform or (especially) TADS 3. On the other hand, it’s in Python (which might make it hook together better with whatever else you’re doing), and it provides a lot of options for how to present the narrative once the events of the story have been generated, such as the ability to tell the story from the perspective of one or another specific viewer.

Coding a large interactive environment in any of these languages is a fairly substantial task, and you would presumably also need to allow yourself time to get familiar with the coding expectations of each.

If you’re more interested in stories about people, social situations, and feelings, you might want a model that instead looked at modeling conversation or social practices between characters. Most of the social practice libraries I know of are proprietary rather than generally available (and some are quite quirky), but if you are interested in thinking about planning for social interaction stories, you might want to look at Versu, Prom Week, and Façade for pointers. Scealextric has a large body of paired actions and reactions, as well — in that case, devoid of any parameters about the physical requirements for a particular reaction.

*

Another place to look would be at academic story-generators after TALESPIN — of which there are many. Most of them are pretty much invisible in the IF world. This page on Story Generator Algorithms gives an overview. On a quick pass through I didn’t identify any obvious solutions for you, but further research into the projects listed there might yield more.

(I do have a post on MEXICA upcoming later this month.)

*

And a couple of asides, mostly for other readers: another approach is to layer story interpretation on top of an existing simulation engine, allowing the engine to create a series of coherent events and then searching that space for event sequences that would qualify as an interesting story in some regard.

This is what Jacob Garbe does with his Dwarf Grandpa project, which takes Dwarf Fortress output and looks for ways to pull those simulated events into a coherent narrative. That post also talks about what Garbe calls “Recursive Narrative Scaffolding,” another story generation technique narratively linked to the tabletop RPG Microscope.

James Ryan has also worked in this space, building a simulator that creates hundreds of years of history that can then be mined for story events: this is the basis of his Bad News project (with Ben Samuel and Adam Summerville):

Beyond its novelty, Bad News is a deeply AI-driven game. Each Bad News town is procedurally generated using the Talk of the Town AI framework. Employing a method inspired by Dwarf Fortress‘s world-generation procedure, each town is simulated from its founding in 1839 up to the summer of 1979, when gameplay takes place. Over the course of this simulation, hundreds of generated town residents live out their lives, embedding themselves in rich social networks and forming subjective (often false) beliefs about the town. This provides an abundance of narrative material and dramatic intrigue—family feuds, love triangles, struggling businesses—that exceeds the capacities of a 45-minute playthrough and that could not have tractably been hand-authored. Several players have reported feeling like they were transported to the generated towns that they visited.

sub-Q Magazine

Making Interactive Fiction: Narrative Design for Writers (Part 2)

by Bruno Dias at July 10, 2018 01:41 PM

This is part two of a two-part series about narrative design aimed at traditional-media writers and IF authors.

In part one, I walked through a series of questions that help clarify a narrative design. Now, I’m going to talk about how one puts all of this together.

The output of narrative design as a process is a design document — in games, it’s common to produce a GDD (game design document) outlining an entire game, but I like writing smaller and shorter treatments of individual systems, mechanics, or simply of a game’s narrative structure apart from any mechanic. In interactive fiction, especially short or experimental fiction, GDDs are rarely written. But I find a narrative design document, marrying major plot events to chunks of content and spelling out how they’re structured and accessed, to be more useful than a screenplay-style story outline.

In fact, flat story outlines are often hard to read or misleading when it comes to interactive narrative. I tend to draw diagrams; some of my work is “documented” in photos of whiteboard sketches or crammed into notebook pages. Spelling out how things are supposed to work is great for finding holes in your own logic or seeing combinations of actions players could take that you didn’t anticipate. Prototyping the main narrative branching points as a simple Twine is a popular approach when dealing with that kind of structure; it lets you both see the shape of the narrative and play through it to explore individual versions of the plot.

At this point, you can start asking some more systematic questions about the story: Does a typical playthrough get across everything I want to say? Does every playthrough? What are the things players will always see? What are the things that only a small fraction of players will see? If players can fail, where and how can they fail? If players are managing some kind of resource, what does that resource economy look like?

Those questions will lead you towards iterating on your design, and they’ll help clarify a lot of underlying issues that we often don’t think about until it’s a little too late. A game where players can miss large segments of story, and therefore get a version of the story that heavily elides important events or backstory, is very different from a game trying to make sure players get a chance to see everything important. Seeing what your structure does will help you figure out what you want it to do, and give you an aim to steer by.
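One way to make those questions concrete is to check them mechanically. The sketch below is a toy (the passage names and story graph are made up, and it only handles a small, acyclic structure), but it shows the principle: enumerate every playthrough, then see which passages every player is guaranteed to hit and which are missable.

story = {
    "start": ["dock", "market"],
    "dock": ["confront"],
    "market": ["confront", "flee"],
    "confront": [],   # an ending
    "flee": [],       # an ending
}

def playthroughs(node="start", path=()):
    path = path + (node,)
    if not story[node]:              # no choices left: this run is over
        yield path
    for choice in story[node]:
        yield from playthroughs(choice, path)

runs = list(playthroughs())
always = set.intersection(*(set(run) for run in runs))
print("every player sees:", sorted(always))
print("only some players see:", sorted(set(story) - always))

Whatever lands in the “only some players see” bucket is exactly the material the missable-story question above is asking about.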

I know this seems like a lot of prepwork before you begin writing (though nothing says you can’t sketch out scenes, moments, or snippets of story before thinking through structural stuff — I often do that, too; it’s useful for figuring out what the story really is). But the idea here is to get you off the autopilot that tools (whether Twine or something else) tend to induce and get you thinking about how to construct a story as a sort of machine of meaning that players will engage with.


Bruno Dias is a writer and narrative designer based in São Paulo. His work has appeared in video game publications (Waypoint, PC Gamer), games (Where the Water Tastes Like Wine), and interactive fiction on Sub-Q and elsewhere.

The post Making Interactive Fiction: Narrative Design for Writers (Part 2) appeared first on sub-Q Magazine.

July 09, 2018

Choice of Games

Author Interview, Hannah Powell-Smith

by Mary Duffy at July 09, 2018 08:41 PM

By the power of your blood, you and your ghosts will take over your crime family! When your cousin murders the city’s most notorious crime boss—your mother—a power struggle erupts across the criminal underworld. As your sisters Octavia and Fuschia vie for control, you alone in the family possess the blood magician’s power to summon and command ghosts. They hunger for your blood; if it’s blood they want, then blood they’ll have.

Blood Money is a 275,000-word interactive novel by Hannah Powell-Smith. I sat down with Hannah to talk blood magic and IF. Blood Money releases this Thursday, July 12th. 

Tell me about the world Blood Money is set in and what inspired it.

Blood Money is set in Nasri City, a tropical canal city where ghosts are ever-present, flitting between the living world and the underworld. And that’s where blood magicians come in. Ghosts are drawn to drink magicians’ blood, and unlike ordinary people, blood magicians can summon and communicate with them.

In Nasri City, blood magicians are distrusted at best, despised at worst. Unless you’re wealthy, it’s not an easy place to live: steeped in crime and corruption, the city has a wide divide between decadent aristocracy and the working classes. You have vast, luscious parkland which is only accessible to certain sections of society, while other districts are run-down and densely populated.

But a crisis point is coming. The merchant classes are growing, rival gangs are working to topple the old crime families, and some blood magicians are building a community of solidarity. Throughout the game, you can influence that tipping point in the direction you choose.

I’ve been interested in ghosts as a theme for a long time, and their blood-drinking was inspired by tales of Odysseus and Aeneas’ journeys to the underworld. The history of Renaissance Italy was an inspiration for the family squabbles and the weight of ancestry, and of course all those Venetian canals. Visuals also played a big part for me: noir imagery of dark cityscapes in the driving rain, images of Sao Joao Batista Cemetery in Rio de Janeiro, the creepy faceless sculptures by Kevin Francis Gray…mixed together, it all combined to make a dark fantasy setting that I’m very proud of.

What brought you to writing interactive fiction?

As teenagers, my now-wife and I created mods for the videogame Baldur’s Gate II, which sharpened coding and design skills for both of us. Along with the interactive books like Fabled Lands that I read as a child, and tabletop roleplaying, that taught me about branching narratives. Then, when I came across Twine many years later, I started making games of my own. I released several interactive short stories about fraught relationships (one of my favourite game topics) and got involved with the interactive fiction community. It was great to make the games I wanted to play, but I didn’t consider making money from it until sub-Q Magazine approached me about reprinting one of my games. I published more games with them, and then, as I’d enjoyed the company’s work for a long time, I got in touch with Choice of Games. The rest is history!

What was the most challenging part about creating the game for you?

I love working with ChoiceScript itself, but for this game the planning was hard work. I had never outlined such a big interactive project, and so increasing the scale, widening the variety of end states, making the plot satisfyingly branchy while making sure the pacing works…it was a challenge! But the guidance through the outlining process made for a much stronger game, and because a lot of that effort was front-loaded, it made the actual game far more straightforward to develop.

What are you working on next?

At the end of the month, I’m teaching at the Infinite Journeys Interactive Fiction Summer School at the British Library. In the longer term, I’m working on a second game for Choice of Games about attending an exclusive finishing school in order to regain your family’s fortunes through marriage, getting into university, or otherwise distinguishing yourself. There are etiquette classes, ballroom dances, rival cliques, and dark secrets bubbling away beneath the surface. And plenty of potential backstabbing—though not as literally as in Blood Money. In this game you take your enemies down with propriety.

Short Answer, Bernard Pivot-style:

Favorite color?
Green.

Favorite word?
Tropical.

Profession other than your own you would like to try.
Academic in the humanities area.

Profession you would never like to try.
Dancer.

Venice or Amsterdam?
Amsterdam—we’re going through a heatwave in the UK so the idea of being in Venice makes me want a lie down!

The XYZZY Awards

The Xyzzymposium Returns

by Joey Jones at July 09, 2018 09:41 AM

The Xyzzymposium is returning! The aim of the Xyzzymposium is to provide in-depth analysis of the finalist entries for each of the XYZZY Award categories. It was last held in 2015-2016, for the 2014 XYZZY Awards. The goal of the Xyzzymposium is still as it was initially described:

A lot of the critical writing in the IF world comes in the form of general reviews; that’s great, but we wanted to see more in-depth writing that considered games through specific foci.

The XYZZYs have no cash prizes or shiny trophies, no red-carpet parties; all we really offer is the respect of your peers, and a slightly more prominent mark in the history of the medium. Both of these become a little more concrete if they’re combined with in-depth critical attention. The Xyzzymposium isn’t intended to be a triumphal march; we’re not here to lavish praise on anointed champions. The purpose of the Xyzzymposium is to show that we’re taking a work seriously enough to wrestle with it.

We’re going to be covering the 2017 Awards (but if we get enough writers we may take a stab at covering the missing years). Writers are being contacted now, with the intention of publishing the reviews throughout the second half of September.

Taking on the mantle of Xyzzymposiarch from Sam Kabo Ashwell, this year the organiser is Joey Jones. You may know Joey for such games as The Chinese Room, Andromeda Dreaming, Sub Rosa, or Trials of the Thief-Taker. 

July 07, 2018

"Adventuron"

The Final Minute

by Chris Ainsley ([email protected]) at July 07, 2018 06:42 AM

The Final Minute is an experimental piece of interactive fiction that integrates an innovative real-time, text-based sports-action mechanic and multiple endings. Blurring the line between fiction and reality, TFM integrates the real-world news cycle into gameplay.

As timeless as the pop songs in Shrek, TFM is best experienced today.

What would you do, in THE FINAL MINUTE?


'Secret' map-drawing commands

by Chris Ainsley ([email protected]) at July 07, 2018 05:00 AM

Adventuron doesn't yet have an integrated auto-mapper, but it does have something of a workaround that might help players draw a map.

These commands should be typed "in-game", at the command line:

Using "The Quest For the Holy Snail" as a demo game here:

Type in:

> drawmap



... to output a text-based map of locations that you have already visited.



Type in:

> drawmap all



... to output a text-based map of all locations (based on the navigation table).


Right now, this process isn't super smooth, so my apologies, but the text-based maps need to be opened in a text editor, then copied and pasted into https://www.planttext.com/ and the map will be rendered, courtesy of the plantuml and graphviz libraries.




The maps aren't north-centric, and therefore are sub-optimal for instant "grokking", but this will improve in future releases.

It's difficult to know whether the "all" variant of this encourages cheating or spoils the game for the player, but if an author is concerned about it, they can block it using a custom T1 handler.

July 06, 2018

The Digital Antiquarian

Doing Windows, Part 3: A Pair of Strike-Outs

by Jimmy Maher at July 06, 2018 12:41 PM

Come August of 1984, Microsoft Windows had missed its originally announced release date by four months and was still nowhere near ready to go. That month, IBM released the PC/AT, a new model of their personal computer based around the more powerful Intel 80286 processor. Amidst the hoopla over that event, they invited Microsoft and other prominent industry players to a sneak preview of something called TopView, and Bill Gates got an answer at last to the fraught question of why IBM had been so uninterested in his own company’s Windows operating environment.

TopView had much in common with Windows and the many other attempts around the industry, whether already on the market or still in the works, to build a more flexible and user-friendly operating environment upon the foundation of MS-DOS. Like Windows and so many of its peers, it would offer multitasking, along with a system of device drivers to isolate applications from the underlying hardware and a toolkit for application developers that would allow them to craft software with a consistent look and feel. Yet one difference made TopView stand out from the pack — and not necessarily in a good way. While it did allow the use of a mouse and offered windows of a sort, it ran in text rather than graphics mode. The end result was a long, long way from the Macintosh-inspired ideal of intuitiveness and attractiveness which Microsoft dreamed of reaching with their own GUI environment.

TopView at the interface level resembled something IBM might have produced for the mainframe market back in the day more than it did Windows and the other microcomputer GUI environments that were its ostensible competitors. Like IBM’s mainframe system software, it was a little stodgy, not terribly pretty, and not notably forgiving toward users who hadn’t done their homework, yet had a lot to offer underneath the hood to anyone who could accept its way of doing business. It was a tool that seemed designed to court power users and office IT administrators, even as its competitors advertised their ease of use to executives and secretaries.

Within its paradigm, though, TopView was a more impressive product than it’s generally given credit for being even today. It sported, for example, true preemptive multitasking¹ months before the arrival of the Commodore Amiga, the first personal computer to ship with such a feature right out of the box. Even ill-behaved vanilla MS-DOS applications could be coerced into multitasking under TopView. Indeed, while IBM hoped, like everyone else making extended operating environments, to tempt third-party programmers into making native applications just for them, they were willing to go to heroic lengths to get existing MS-DOS applications working inside TopView in the meantime. They provided special specification files — known as “Program Information Files,” or PIFs — for virtually all popular MS-DOS software. These told TopView exactly how and when their subjects would try to access the computer’s hardware, whereupon TopView would step in to process those calls itself, transparently to the ill-behaved application. It was an admittedly brittle solution to a problem which seemed to have no unadulteratedly good ones; it required IBM to research the technical underpinnings of every major new piece of MS-DOS software that entered the market in order to keep up with an endless game of whack-a-mole that was exhausting just to think about. Still, it was presumably better than punting on the whole problem of MS-DOS compatibility, as Visi On had done. Whatever else one could say about IBM’s approach to extending MS-DOS, they thus had apparently learned at least a little something from the travails of their competitors. Even the decision to run in character mode sounds far more defensible when you consider that up to two-thirds of MS-DOS computers at the time of TopView’s release were equipped only with a monochrome screen capable of no other mode.

Unfortunately, TopView failed to overcome many of the other issues that dogged its competitors. Having been so self-consciously paired with the pricey PC/AT, it was still a bit out in front of the sweet spot of hardware requirements, requiring a 512 K machine to do much of anything at all. And it was still dogged by the 640 K barrier, that most troublesome of all aspects of MS-DOS’s primitiveness. With hacks to get around the barrier still in their relative infancy, TopView didn’t even try to support more memory, and this inevitably limited the appeal of its multitasking capability. With applications continuing to grow in complexity and continuing to gobble up ever more memory, it wouldn’t be long before 640 K wouldn’t be enough to run even two pieces of heavyweight business software at the same time, especially after one had factored in the overhead of the operating environment itself.

A Quick Tour of TopView


While it isn’t technically a graphical user interface, TopView shares many features with contemporaneous products like Visi On and Microsoft Windows. Here we’re choosing an application to launch from a list of those that are installed. The little bullet to the left of each name on the list is important; it indicates that we have enough memory free to run that particular application. With no more than 640 K available in this multitasking environment and no virtual-memory capability, memory usage is a constant concern.

Here we see TopView’s multitasking capabilities. We’re running the WordStar word processor and the dBase database, two of the most popular MS-DOS business applications, at the same time. Note the “windows” drawn purely out of text characters. Preemptive multitasking like TopView is doing here wouldn’t come to Microsoft Windows until Windows 95, and wouldn’t reach the Macintosh until OS X was released in 2001.

We bring up a TopView context window by hitting the third — yes, third — button on IBM’s official mouse. Here we can switch between tasks, adjust window sizes and positions (albeit somewhat awkwardly, given the limitations of pure text), and even cut and paste between many MS-DOS applications that never anticipated the need for such a function. No other operating environment would ever jump through more hoops to make MS-DOS applications work like they had been designed for a multitasking windowed paradigm from the start.

Some of those hoops are seen above. Users make MS-DOS applications run inside TopView by defining a range of parameters explaining just what the application in question tries to do and how it does it. Thankfully, pre-made definition files for a huge range of popular software shipped with the environment. Brittle as heck though this solution might be, you certainly can’t fault IBM’s determination. Microsoft would adopt TopView’s “Program Information File,” or PIF, for use in Windows as well. It would thereby become the one enduring technical legacy of TopView, persisting in Windows for years after the IBM product was discontinued in 1988.

One of the hidden innovations of TopView is its “Window Design Aid,” which lets programmers of native applications define their interface visually, then generates the appropriate code to create it. Such visually-oriented time-savers wouldn’t become commonplace programming aids for another decade at least. It all speaks to a product that’s more visionary than its reputation — and its complete lack of graphics — might suggest.

TopView shipped in March of 1985 — later than planned, but nowhere near as late as Microsoft Windows, which was now almost a full year behind schedule. It met a fractious reception. Some pundits called it the most important product to come out of IBM since the release of the original IBM PC, while others dismissed it as a bloated white elephant that hadn’t a prayer of winning mainstream acceptance — not even with the IBM logo on its box and a surprisingly cheap suggested list price of just $149. For many IBM watchers — not least those watching with concern inside Microsoft — TopView was most interesting not so much as a piece of technology as a sign of IBM’s strategic direction. “TopView is the subject of fevered whispers throughout the computer industry not because of what it does but because of what it means,” wrote PC Magazine. It had “sent shivers through the PC universe and generated watchfulness” and “possibly even paranoia. Many experts think, and some fear, that TopView is the first step in IBM’s lowering of the skirt over the PC — the beginning of a closed, proprietary operating system.”

Many did indeed see TopView as a sign that IBM was hoping to return to the old System/360 model of computing, seizing complete control of the personal-computing market by cutting Microsoft out of the system-software side. According to this point of view, the MS-DOS compatibility IBM had bent over backward to build into TopView needed to last only as long as it took third-party developers to write native TopView applications. Once a critical mass of same had been built up, it shouldn’t be that difficult to decouple TopView from MS-DOS entirely, turning it into a complete, self-standing operating system in its own right. For Bill Gates, this was a true nightmare scenario, one that could mean the end of his business.

But such worries about a TopView-dominated future, to whatever extent he had them, proved unfounded. A power-user product with mostly hacker appeal in a market that revolved around the business user just trying to get her work done, TopView quickly fizzled into irrelevance, providing in the process an early warning sign to IBM, should they choose to heed it, that their omnipotence in the microcomputer market wasn’t as complete as it had been for so long in the mainframe market. IBM, a company that didn’t abandon products easily, wouldn’t officially discontinue TopView until 1988. By that time, though, the most common reaction to the news would be either “Geez, that old thing was still around?” or, more likely, “What’s TopView?”

Of course, all of this was the best possible news from Microsoft’s perspective. IBM still needed the MS-DOS they provided as much as ever — and, whatever else happened, TopView wasn’t going to be the as-yet-unreleased Windows’s undoing.

In the meantime, Bill Gates had Windows itself to worry about, and that was becoming more than enough to contend with. Beginning in February of 1984, when the planned Windows release date was given a modest push from April to May of that year, Microsoft announced delay after delay after delay. The constant postponements made the project an industry laughingstock. It became the most prominent target for a derisive new buzzword that had been coined by a software developer named Ann Winblad in 1983: “vaporware.”

Inside Microsoft, Windows’s reputation was little better. As 1984 wore on, the project seemed to be regressing rather than progressing, becoming a more and more ramshackle affair that ran more and more poorly. Microsoft’s own application developers kicked and screamed when asked to consider writing something for Windows; they all wanted to write for the sexy Macintosh.

Neil Konzen, a Microsoft programmer who had been working with the Macintosh since almost two years before that machine’s release, was asked to take a hard look at the state of Windows in mid-1984. He told Bill Gates that it was “a piece of crap,” “a total disaster.” Partially in response to that verdict, Gates pushed through a corporate reorganization, placing Steve Ballmer, his most trusted lieutenant, in charge of system software and thus of Windows. He reportedly told Ballmer to get Windows done or else find himself a job at another company. And in corporate America, of course, shit rolls downhill; Ballmer started burning through Windows project managers at a prodigious pace. The project acquired a reputation inside Microsoft as an assignment to be avoided at all costs, a place where promising careers went to die. Observers inside and outside the project’s orbit were all left with the same question: just what the hell was preventing all these smart people from just getting Windows done?

The fact was that Windows was by far the biggest thing Microsoft had ever attempted from the standpoint of software engineering, and it exposed the limitations of the development methodology that had gotten them this far. Ever since the days when Gates himself had cranked out their very first product, a version of BASIC to be distributed on paper tape for the Altair kit computer, Microsoft had functioned as a nested set of cults of personality, each project driven by if not belonging solely to a single smart hacker who called all the shots. For some time now, the cracks in this edifice had been peeking through; even when working on the original IBM PC, Gates was reportedly shocked and nonplussed at the more structured approach to project management that was the norm at IBM, a company that had already brought to fruition some of the most ambitious projects in the history of the computer industry. And IBM’s project managers felt the same way upon encountering Microsoft. “They were just a bunch of nerds, just kids,” remembers one. “They had no test suites, nothing.” Or, as another puts it:

They had a model where they just totally forgot about being efficient. That blew our minds. There we were watching all of these software tools that were supposed to work together being built by totally independent units, and nobody was talking to each other. They didn’t use any of each other’s code and they didn’t share anything.

With Windows, the freelancing approach to software development finally revealed itself to be clearly, undeniably untenable. Scott MacGregor, the recent arrival from Xerox who was Windows’s chief technical architect in 1984, remembers his frustration with this hugely successful young company — one on whose products many of the Fortune 500 elite of the business world were now dependent — that persisted in making important technical decisions on the basis of its employees’ individual whims:

I don’t think Bill understood the magnitude of doing a project such as Windows. All the projects Bill had ever worked on could be done in a week or a weekend by one or two different people. That’s a very different kind of project than one which takes multiple people more than a year to do.

I don’t think of Bill as having a lot of formal management skills, not in those days. He was kind of weak on managing people, so there was a certain kind of person who would do well in the environment. There were a lot of people at that time with no people skills whatsoever, people who were absolutely incompetent at managing people. It was the Peter Principle: very successful technical people would get promoted to management roles. You’d get thirty people reporting to one guy who was not on speaking terms with the rest of the group, which is inconceivable.

One has to suspect that MacGregor had one particular bête noire in mind when talking about his “certain kind of person.” In the eyes of MacGregor and many others inside Microsoft, Steve Ballmer combined most of Bill Gates’s bad qualities with none of his good ones. Like Gates, he had a management style that often relied on browbeating, but he lacked the technical chops to back it up. He was a yes man in a culture that didn’t suffer fools gladly, a would-be motivational speaker who too often failed to motivate, the kind of fellow who constantly talked at you rather than with you. One telling anecdote has him visiting the beleaguered Windows team to deliver the sort of pep talk one might give to a football team at halftime, complete with shouts and fist pumps. He was greeted by… laughter. “You don’t believe in this?” Ballmer asked, more than a little taken aback. The team just stood there uncomfortably, uncertain how to respond to a man that MacGregor and many of the rest of them considered almost a buffoon, a “non-tech cheerleader.”

And yet MacGregor had problems of his own in herding the programmers who were expected to implement his grand technical vision. Many of them saw said vision as an overly slavish imitation of the Xerox Star office system, whose windowing system he had previously designed. He seemed willfully determined to ignore the further GUI innovations of the Macintosh, a machine with which many at Microsoft — not least Bill Gates himself — were deeply enamored. The most irritating aspect of his stubbornness was his insistence that Windows should employ only “tiled windows” that always stretched across the full width of the screen and couldn’t overlay one another or be dragged about freely in the way of their equivalents on the Macintosh.

All of this created a great deal of discord inside the project, especially given that much of MacGregor’s own code allegedly didn’t work all that well. Eventually Gates and Ballmer brought in Neil Konzen to rework much of MacGregor’s code, usurping much of his authority in the process. As Windows began to slip through MacGregor’s fingers, it began to resemble the Macintosh more and more; Konzen was so intimately familiar with Apple’s dream machine that Steve Jobs had once personally tried to recruit him. According to Bob Belleville, another programmer on the Windows team, Konzen gave to Windows “the same internal structure” as the Macintosh operating system; “in fact, some of the same errors were carried across.” Unfortunately, the tiled-windows scheme was judged to be too deeply embedded by this point to change.

In October of 1984, Microsoft announced that Windows wouldn’t ship until June of 1985. Gates sent Ballmer out on an “apology tour” of the technology press, prostrating himself before journalist after journalist. It didn’t seem to help much; the press continued to pile on with glee. Stewart Alsop II, the well-respected editor of InfoWorld magazine, wrote that “buyers probably believe the new delivery date for Windows with the same fervor that they believe in Santa Claus.” Then, he got downright nasty: “If you’ve got something to sell, deliver. Otherwise, see to the business of creating the product instead of hawking vaporware.”

If the technology press was annoyed with Microsoft’s constant delays and prevarications, the third parties who had decided or been pressured into supporting Windows were getting even more impatient. One by one, the clone makers who had agreed to ship Windows with their machines backed out of their deals. Third-party software developers, meanwhile, kept getting different versions of the same letter from Microsoft: “We’ve taken the wrong approach, so everything you’ve done you need to trash and start over.” They too started dropping one by one off the Windows bandwagon. The most painful defection of all was that of Lotus, who now reneged on their promise of a Windows version of Lotus 1-2-3. The latter was the most ubiquitous single software product in corporate America, excepting only MS-DOS, and Microsoft had believed that the Windows Lotus 1-2-3 would almost guarantee their new GUI environment’s success. The question now was whether the lack of same would have the opposite effect.

In January of 1985, Steve Ballmer brought in Microsoft’s fifth Windows project manager: Tandy Trower, a three-year veteran with the company who had recently been managing Microsoft BASIC. Trower was keenly aware of Bill Gates’s displeasure at recent inroads being made into Microsoft’s traditional BASIC-using demographic by a new product called Turbo Pascal, from a new industry player called Borland. The Windows project’s reputation inside Microsoft was such that he initially assumed he was being set up to fail, thereby giving Gates an excuse to fire him. “Nobody wanted to touch Windows,” remembers Trower. “It was like the death project.”

Trower came in just as Scott MacGregor, the Xerox golden boy who had arrived amidst such high expectations a year and a half before, was leaving amidst the ongoing discord and frustration. Ballmer elected to replace MacGregor with… himself as Windows’s chief technical architect. Not only was he eminently unqualified for such a role, but he thus placed Trower in the awkward position of having the same person as both boss and underling.

As it happened, though, there wasn’t a lot of need for new technical architecting. In that respect at least, Trower’s brief was simple. There were to be no new technical or philosophical directions explored, no more debates over the merits of tiled versus overlapping windows or any of the rest. The decisions that had already been made would remain made, for better or for worse. Trower was just to get ‘er done, thereby stemming the deluge of mocking press and keeping Ballmer from having to go on any more humiliating apology tours. He did an admirable job, all things considered, of bringing some sort of coherent project-management methodology to a group of people who desperately needed one.

What could get all too easily lost amidst all the mockery and all the very real faults with the Windows project as a functioning business unit was the sheer difficulty of the task of building a GUI environment without abandoning the legacy of MS-DOS. Unlike Apple, Microsoft didn’t enjoy the luxury of starting with a clean slate; they had to keep one foot in the past as well as one in the future. Nor did they enjoy their competitor’s advantage of controlling the hardware on which their GUI environment must run. The open architecture of the IBM PC, combined with a market for clones that was by now absolutely exploding, meant that Microsoft was forced to contend with a crazy quilt of different hardware configurations. All those different video cards, printers, and memory configurations that could go into an MS-DOS machine required Microsoft to provide drivers for them, while all of the popular existing MS-DOS applications had to at the very least be launchable from Windows. Apple, by contrast, had been able to build the GUI environment of their dreams with no need to compromise with what had come before, and had released exactly two Macintosh models to date — models with an architecture so closed that opening their cases required a special screwdriver only available to Authorized Apple Service Providers.

In the face of all the challenges, some thirty programmers under Trower “sweated blood trying to get this thing done,” as one of them later put it. It soon became clear that they weren’t going to make the June 1985 deadline (thus presumably disappointing those among Stewart Alsop’s readers who still believed in Santa Claus). Yet they did manage to move forward in far more orderly fashion than had been seen during all of the previous year. Microsoft was able to bring to the Comdex trade show in May of 1985 a version of Windows which looked far more complete and polished than anything they had shown before, and on June 28, 1985, a  feature-complete “Preview Edition” was sent to many of the outside developers who Microsoft hoped would write applications for the new environment. But the official first commercial release of Windows, known as Windows 1.01, didn’t ship until November of 1985, timed to coincide with that fall’s Comdex show.

In marked contrast to the inescapable presence Windows had been at its first Comdex of two years before, the premiere of an actual shipping version of Windows that November was a strangely subdued affair. But then, the spirit of the times as well was now radically different. In the view of many pundits, the bloom was rather off the rose for GUIs in general. Certainly the GUI-mania of the Fall 1983 Comdex and Apple’s “1984” advertisement now seemed like the distant past. IBM’s pseudo-GUI TopView had already failed, as had Visi On, while the various other GUI products on offer for MS-DOS machines were at best struggling for marketplace acceptance. Even the Macintosh had fallen on hard times, such that many were questioning its very survival. Steve Jobs, the GUI’s foremost evangelist, had been ignominiously booted from Apple the previous summer — rendered, as the conventional wisdom would have it, a has-been at age thirty. Was the GUI itself doomed to suffer the same fate? What, asked the conventional-wisdom spouters, was really so bad about MS-DOS’s blinking command prompt? It was good enough to let corporate America get work done, and that was the important thing. Surely it wouldn’t be Windows, an industry laughingstock for the better part of two years now, that turned all this GUI hostility back in the market’s face. Windows was launching into a headwind fit to sink the Queen Mary.

It was a Microsoft public-relations specialist named Pam Edstrom who devised the perfect way of subverting the skepticism and even ridicule that was bound to accompany the belated launch of the computer industry’s most infamous example of vaporware to date. She did so by stealing a well-worn page from the playbook of media-savvy politicians and celebrities who have found themselves embroiled in controversy. How do you stop people making fun of you? Why, you beat them to the punch by making fun of yourself first.

Edstrom invited everybody who was anybody in technology to a “Microsoft Roast” that Comdex. The columnist John C. Dvorak became master of ceremonies, doing a credible job with a comedic monologue to open the affair. (Sample joke about the prematurely bald Ballmer: “When Windows was first announced, Ballmer still had hair!”) Gates and Ballmer themselves then took the stage, where Stewart Alsop presented them with an InfoWorld “Golden Vaporware Award.” The two main men of Microsoft then launched into a comedy routine of their own that was only occasionally cringe-worthy, playing on their established reputations as the software industry’s enfant terrible and his toothy-but-not-overly-bright guard dog. Gates said that Ballmer had wanted to cut features: “He came up with this idea that we could rename this thing Microsoft Window; we would have shipped that a long time ago.” Ballmer told how Gates had ordered him to “ship this thing before the snow falls, or you’ll end your career here doing Windows!”; the joke here was that in Seattle, where the two lived and worked, snow almost never falls. Come the finale, they sang “The Impossible Dream” together as a giant shopping cart containing the first 500 boxed copies of Windows rolled onto the stage amidst billows of dry ice.

All told, it was a rare display of self-deprecating humanity and showmanship from two people not much known for either. From a PR perspective, it was about the best lemonade Microsoft could possibly have made out of a lemon of a situation. The press was charmed enough to start writing about Windows in more cautiously positive terms than they had in a long, long time. “The future of integration [can] be perceived through Windows,” wrote PC World. Meanwhile Jim Seymour, another respected pundit, wrote a column for the next issue of PC Week that perfectly parroted the message Microsoft was trying to get across:

I am a Windows fan, not because of what it is today but what it almost certainly will become. I think developers who don’t build Windows compatibility into new products and new releases of successful products are crazy. The secret of Windows in its present state is how much it offers program developers. They don’t have to write screen drivers [or] printer drivers; they can offer their customers a kind of two-bit concurrency and data exchange.

The most telling aspect of even the most sympathetic early reviews is their future orientation; they emphasize always what Windows will become, not what it is. Because what Windows actually was in November of 1985 was something highly problematic if not utterly superfluous.

The litany of problems began with that same old GUI bugaboo: performance. Two years before, Bill Gates had promised an environment that would run on any IBM PC or clone with at least 192 K of memory. Technically speaking, Microsoft had come very close to meeting that target: Windows 1.01 would run even on the original IBM PC from 1981, as long as it had at least 256 K of memory. It didn’t even absolutely require a hard drive. But running and running well — or, perhaps better put, running usably — were two very different matters. Windows could run on a floppy-based system, noted PC Magazine dryly, “in the same sense that you can bail a swimming pool dry with a teaspoon.” To have a system that wasn’t so excruciatingly slow as to outweigh any possible benefit it might deliver, you really needed a hard drive, 640 K or more of memory, and an 80286 processor like that found in the IBM PC/AT. Even on a hot-rod machine like this, Windows was far from snappy. “Most people will say that any screen refresh that can be watched takes too long,” wrote PC Magazine. “Very little happens too quickly to see in Windows.” One of Microsoft’s own Windows programmers would later offer a still more candid assessment: even at this late date, he would say, “Windows was a pig,” the result of a project that had passed through too many hands and had too many square chunks of code hammered into too many round holes.

Subjectively, Windows felt like it had been designed and programmed by a group of people who had read a whole lot about the Macintosh but never actually seen or used one. “I use a Macintosh enough to know what a mouse-based point-and-click interface should feel like,” wrote John C. Dvorak after the goodwill engendered by the Microsoft Roast had faded. “Go play with a Mac and you’ll see what I mean. Windows is clunky by comparison. Very clunky.” This reputation for endemic clunkiness — for being a Chrysler minivan pitted against Apple’s fine-tuned Porsche of a GUI — would continue to dog Windows for decades to come. In this first release, it was driven home most of all by the weird and unsatisfying system of “tiled” windows.

All of which was a shame because in certain ways Windows was actually far more technically ambitious than the contemporary Macintosh. It offered a cooperative-multitasking system that, if not quite the preemptive multitasking of TopView or the new Commodore Amiga, was more than the single-tasking Mac could boast. And it also offered a virtual-memory scheme which let the user run more applications than would fit inside 640 K. Additional RAM beyond the 640 K barrier or a hard drive, if either or both were extant, could be used as a swap space when the user tried to open more applications than there was room for in conventional memory. Windows would then automatically copy data back and forth between main memory and the swap space as needed in order to keep things running. The user was thus freed from having to constantly worry about her memory usage, as she did in TopView — although performance problems quickly started to rear their heads if she went too crazy. In that circumstance, “the thrashing as Windows alternately loads one application and then the other brings the machine to its knees,” wrote PC Magazine, describing another trait destined to remain a Windows totem for years to come.

A Quick Tour of Windows 1.01


Windows 1.01 boots into what it calls the “MS-DOS Executive,” which resembles one of the many popular aftermarket file managers of the MS-DOS era, such as Norton Commander. Applications are started from here by double-clicking on their actual .exe files. This version of Windows does nothing to insulate the users from the file-level contents of their hard drives; it has no icons representing installed applications and, indeed, no concept of installation at all. Using Windows 1.01 is thus akin to using Windows 10 if the Start Menu, Taskbar, Quick-Launch Toolbar, etc. didn’t exist, and all interactions happened at the level of File Explorer windows.

In a sense, the MS-DOS Executive is Windows. Closing it serves as the shutdown command.

Under Microsoft’s “tiled windows” approach, windows always fill the width of the screen but can be tiled vertically. They’re never allowed to overlap one another under any circumstances, and taken as a group will always fill the screen. One window, the MS-DOS Executive, will always be open and thus filling the screen even if nothing else is running. There is no concept of a desktop “beneath” the windows.

Windows can be sized to suit in vertical terms by grabbing the widget at their top right and dragging. Here we’re making the MS-DOS Executive window larger. When we release the mouse button, the Clock window will automatically be made smaller in proportion to its companion’s growth. Remember, overlapping windows aren’t allowed, no matter how hard you try to trick the software…

…with one exception. Sub-windows opened by applications can be dragged freely around the screen and can overlay other windows. Go figure!

If we try to drag a window around by its title bar, an interesting philosophical distinction is revealed between Windows 1.01 and more recent versions. We wind up swapping the contents of one window with those of another. Applications, in other words, aren’t intrinsically bound to their windows, but can be moved among them. In the screenshot above, the disk icon is actually our mouse cursor, representing the MS-DOS Executive window’s contents, which we’re about to swap with the contents of what is currently the Clock window.

Windows 1.01 shipped with Write, a fairly impressive minimalist word processor — arguably the most impressive application ever made for the little-used operating environment.

In contrast to the weirdness of other aspects of Windows 1.01, working within an application like Write feels reassuringly familiar, what with its scroll bars and Macintosh-like pull-down menus. Interestingly, the latter use the click-and-hold approach of the Mac rather than the click-once approach of later versions of Windows.

Windows 1.01 doesn’t have a great way of getting around the 640 K barrier, but it does implement a virtual-memory scheme — no mean feat in itself on a processor without built-in memory protection — which uses any memory beyond 640 K as essentially a RAM disk — or, as Microsoft called it, a “Smart Drive.” In the absence of extra memory, or if it too is filled up, the hard disk becomes the swap area.

By the time Windows was ready, all of the clone makers whom Bill Gates had cajoled and threatened into shipping it with their computers had jumped off the bandwagon, telling him that it had simply taken him too long to deliver, and that the product which he had finally delivered was simply too slow on most hardware for them to foist it on their customers in good conscience. With that path to acceptance closed to them, Microsoft was forced to push Windows as a boxed add-on sold through retail channels, a first for them in the context of a piece of system software. In a measure of just how badly Gates wanted Windows to succeed, Microsoft elected to price it at only $99 — one-tenth of what VisiCorp had tried to ask for Visi On two years before — despite its huge development cost.

Unfortunately, the performance problems, the awkwardness of the tiled windows, and the almost complete lack of native Windows applications beyond those that shipped with the environment outweighed the low price; almost nobody bought the thing. Microsoft was trapped by the old chicken-or-the-egg conundrum that comes with the launch of any new computing platform — a problem that is solved only with difficulty in even the best circumstances. Buyers wanted to see Windows applications before they bought the operating environment, while software developers wanted to see a market full of eager buyers before they invested in the platform. The fact that Windows could run most vanilla MS-DOS applications with some degree or another of felicity only helped the software developers make the decision to stay away unless and until the market started screaming for Windows-native versions of their products. Thus, the MS-DOS compatibility Microsoft had built into Windows, which had been intended as a mere bridge to the Windows-native world of the future, proved something of a double-edged sword.

When you add up all of the hard realities, it comes as little surprise that Microsoft’s first GUI sparked a brief run of favorable press notices, a somewhat longer run of more skeptical commentary, and then disappeared without a trace. Already by the spring of 1986, it was a non-factor, appearing for all the world to be just one more gravestone in the GUI graveyard, likely to be remembered only as a pundit’s punch line. Bill Gates could comfort himself only with the fact that IBM’s own big system-software innovation had landed with a similar splat.

IBM and Microsoft had each tried to go it alone, had each tried to build something better upon the foundation of MS-DOS, and had each struck out swinging. What now? Perhaps the odd couple still needed one another, loath though either was to admit it. In fact, by that spring of 1986 a gradual rapprochement had already been underway for a year, despite deep misgivings from both parties. TopView and Windows 1 had both been a bust, but neither company had gotten where they were by giving up easily. If they pooled their forces once again, who knew what they might achieve. After all, it had worked out pretty well the first time around.

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, Computer Wars: The Fall of IBM and the Future of Global Technology by Charles H. Ferguson and Charles R. Morris, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC Magazine of April 30 1985, February 25 1986, April 18 1987, and April 12 1988; Byte of February 1985, May 1988, and the special issue of Fall 1985; InfoWorld of May 7 1984 and November 19 1984; PC World of December 1985; Tandy Trower’s “The Secret Origins of Windows” on the Technologizer website. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)


  1. This is perhaps a good point to introduce a quick primer on multitasking techniques to those of you who may not be familiar with its vagaries. The first thing to understand is that multitasking during this period was fundamentally an illusion. The CPUs in the computers of this era were actually only capable of doing one task at a time. Multitasking was the art of switching the CPU’s attention between tasks quickly enough that several things seemed to be happening at once — that several applications seemed to be running at once. There are two basic approaches to creating this illusionary but hugely useful form of multitasking.

    Cooperative multitasking — found in systems like the Apple Lisa, the Apple Macintosh between 1987’s System 5 and the introduction of OS X in 2001, and early versions of Microsoft Windows — is so named because it relies on the cooperation of the applications themselves. A well-behaved, well-programmed application is expected to periodically relinquish its control of the computer voluntarily to the operating system, which can then see if any of its own tasks need to be completed or any other applications have something to do. A cooperative-multitasking operating system is easier to program and less resource-intensive than the alternative, but its most important drawback is made clear to the user as soon as she tries to use an application that isn’t terribly well-behaved or well-programmed. In particular, an application that goes into an infinite loop of some sort — a very common sort of bug — will lock up the whole computer, bringing the whole operating system down with it.

    Preemptive multitasking — found in the Commodore Amiga, Mac OS X, Unix and Linux, and later versions of Microsoft Windows — is so named because it gives the operating system the authority to wrest control from — to preempt — individual applications. Thus even a looping program can only slow down the system as a whole, not kill it entirely. For this reason, it’s by far the more desirable approach to multitasking, but also the more complicated to implement. 
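    To make the cooperative model concrete, here is a minimal sketch in Python (chosen purely for legibility; nothing remotely like it ran on the machines discussed here). Each “application” is a generator that voluntarily yields control back to a simple round-robin scheduler:

    def app(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield                     # the well-behaved app hands control back

    def run_cooperatively(tasks):
        while tasks:
            task = tasks.pop(0)
            try:
                next(task)            # let the task run until its next yield
                tasks.append(task)    # requeue it at the back of the line
            except StopIteration:
                pass                  # the task finished; drop it

    run_cooperatively([app("spreadsheet", 2), app("word processor", 3)])

    A task that never yields would hang the whole loop, which is exactly the failure mode described above. Under preemptive multitasking, by contrast, the interruption comes from a hardware timer rather than from anything the task chooses to do, so even a program stuck in an infinite loop gets swapped out on schedule.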

Strand Games

Introducing KL: Part 1, "L"

by hugh at July 06, 2018 12:29 AM

What is KL?

KL stands for "knowledge language", and is a new domain-specific programming language for interactive fiction developed at Strand.

Broadly speaking, KL can be split into the "K" part and the "L" part: the "L" part is a general-purpose programming language, while the "K" part provides specific features for manipulating knowledge structures and representing general-purpose knowledge.

It was understood from the start that any domain-specific language that does not also...

July 03, 2018

Emily Short

Postmortems (Raph Koster)

by Emily Short at July 03, 2018 09:40 PM

Raph Koster’s Postmortems is a series of essays and talks about his work. That work includes online RPGs and MUDs, some with a story focus perhaps relevant to people on this blog. (Actually, this book is just Volume One, with more volumes to come — but accordingly, it speaks about some of Koster’s earliest work, which is the material that probably dovetails the most with the interests of IF enthusiasts.)

Koster offers an introduction to MUDs that launches from Adventure, but explains how playing such a game with others differs. There’s a good bit of design narrative and history here about those games — which may well be interesting to readers of this blog, as they’re adjacent to IF. I especially enjoyed reading the (plentiful) examples of MUD scripting, for comparison with how early IF languages worked. There are also detailed descriptions of quests and experiences that would now be difficult or impossible to recapture, such as a “Beowulf” quest from LegendMUD.

I found some of these passages a little dizzying, in a good way: they offered me a glance at an alternate universe of text-based, narrative-studded games, ones that are rarely discussed in the context of the IF canon. By which I mean: I probably should have known about a lot of this all along. (But there are so many things I should have known all along.)

At any rate, I recommend it for people who are interested in the history of games-adjacent-to-IF.

*

Postmortems is also a book about what it’s like to be in a games career, to care about and love games, to think about and with games. The first essay is about Koster’s childhood game writing, as a kid in Peru, and how he grew up from there. It’s illustrated with sketches from the game concepts of his youth. He writes about games he wrote as gifts and as messages to people close to him: another practice I value.

Because the book is drawing from such diverse sources — talks, written work, pieces created as retrospectives and other pieces written at the same time as the games themselves, some articles that include sample code and others meant for very non-technical audiences — it’s quite a varied read. But that is also part of the book’s charm.

*

I’ve written about Raph’s Theory of Fun for Game Design in the past.

Disclosure: I received a free PDF advance review copy of this book for the purposes of coverage.

sub-Q Magazine

July 2018 – Table of Contents

by Stewart C Baker at July 03, 2018 01:41 PM

Our July game, The Hidden King by dcsross, presents us with a picture of madness, as the unnamed narrator struggles to navigate the shifting realities of a schizoaffective disorder.

Also, there may be werewolves.

Here’s our schedule for July:

July 10th – Bruno Dias brings us part two of a primer in narrative design for writers.  (Read part 1 now)

July 17th – Our story of the month, The Hidden King, by dcsross. (If you’re a Patreon supporter or subscriber, you can get access to this already!)

July 24th – An interview with our July author, dcsross.

The post July 2018 – Table of Contents appeared first on sub-Q Magazine.

July 02, 2018

Wade's Important Astrolab

Time Warp extension rejigged, CC licence added

by Wade ([email protected]) at July 02, 2018 05:41 AM

Thanks to a suggestion by Robert Patten, I've put a proper Creative Commons licence in my Time Warp Inform extension's documentation. This way, anyone who wants to add it to their project knows where they stand. I'm using the most accommodating licence that exists, the Attribution 4.0 International (CC BY 4.0) licence:

"Summarily this licence allows users to distribute, remix and build upon a work, and create Derivative Works – even for commercial use – provided they credit the original creator/s (and any other nominated parties)." (full details in the extension docs)

I also polished the extension itself, and the demo project, so I'm calling them both version 2.

Time Warp runs in Glulx projects and Z8 projects, and has been tested in all Informs from 6G60 to 6M62 (which is the current one as I type this). The code is simple, so I expect it not to break easily and to be easy to fix if it does; it might even work in older versions of Inform 7, though I offer no guarantees.

If you don't know what I'm talking about because you missed my Time Warp plug post, click here to go to it. Basically, Time Warp could be an easter egg in your game! And it requires almost zilch effort to add it in. The demo project shows you how and the extension itself explains how. Click here to try or download Time Warp / the demo project / the extension.

Time Warp's thrilling title screen. There are still more intrigues here than in the court of the Medicis.

July 01, 2018

Post Position

“Bullet” and Poem without Suffering

by Nick Montfort at July 01, 2018 06:51 PM

Discussed in this review: “Bullet,” David Byrne, American Utopia, Nonesuch, 2018; Poem without Suffering, Josef Kaplan, Wonder Books, 2015

David Byrne’s earworm takes a distant yet close perspective, describing a bullet’s fatal encounter with a human body. Did he know about Kaplan’s similar short, rapid, book-length poem? Byrne’s song sets its sights on an adult man, Kaplan’s poem on a child. The life of the child is hinted at by describing what a warm maternal relationship is like, and by mentioning injuries from falling off a bunk bed and being hit by a baseball. We hear about the man’s life because of what the bullet cuts through: “Skin that women had touched,” “Many fine meals he tasted there,” “his heart with thoughts of you.” The general description is very effective. There are striking metaphors — positive associations — for the bullet itself, also. In Poem, it is a triumphant runner (such as Usain Bolt, who bears the name of a crossbow’s projectile) dragging gore from the body as if it were a trophy or banner. In “Bullet,” it is “Like an old grey dog / On a fox’s trail.” Perhaps America’s reliable old dog cannot be taught new tricks.

American Utopia · Poem without Suffering

June 30, 2018

Emily Short

End of June Link Assortment

by Emily Short at June 30, 2018 03:40 PM

The July 4 meeting of the Oxford/London IF Meetup will feature Leigh Alexander presenting on the narrative design process of Reigns: Her Majesty. We will start the session by playing through a bit of the game, so please do feel free to come even if you’re not familiar with it.

(And can I just say how pleased I am with this method of celebrating Independence Day, by hanging out in London with other expats playing a game about being a queen.)

July 6 is the deadline to send your intent to participate in Cragne Manor, an Anchorhead tribute game with rooms written by different authors, and organized by Ryan Veeder and Jenni Polodna. There’s more about this project on the intfiction forum. Contributions are to consist either of Inform 7 code or of detailed specifications for something to be rendered in Inform 7.
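
If you’re wondering what a minimal single-room contribution might even look like, here is a purely hypothetical Inform 7 sketch — the room and objects are invented for illustration, and the organizers’ spec is of course the real authority on conventions and requirements:

The Mildewed Parlor is a room. "Water-stained wallpaper peels from the walls, and something upstairs is breathing."

Some peeling wallpaper is scenery in the Mildewed Parlor. The description is "Up close, the stains almost form a face."

A tarnished locket is in the Mildewed Parlor. The description of the tarnished locket is "It refuses to open."

Instead of listening to the Mildewed Parlor: say "The breathing stops."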

Collaboratively authored IF projects have a long history, from Shades of Gray to Coke is It! and the Textfire 12-pack to IF Whispers and Alabaster to the Apollo 18 tribute album. This particular one sounds like it strikes a nice balance of organization and chaos, so I’m looking forward to seeing where it goes.

July 7 is the next Meetup of the SF Bay IF group.

July 19 is the next meeting of the Boston/Cambridge PR-IF.

July 21st, 3-5 p.m., the Baltimore/DC group meets at Mad City Coffee in Columbia to discuss The Wand.

On July 31st in Canterbury (UK), there is a session on how to build escape rooms for libraries.

Gothic Novel Jam is a jam for games or works inspired by the gothic novel in any fashion, and is running throughout July. IF and related narrative games are welcome.

IntroComp is now accepting intents to enter. IntroComp is a competition in which you can submit just an excerpt of an unfinished interactive fiction game, and receive feedback from players about what they liked or didn’t like about it. If you’d like to participate as an author, register with the site immediately (this closes June 30, so today). Games themselves must be submitted by July 31 and judging will occur during August.

Now through August 2, the Future of Storytelling Prize is seeking submissions of interactive storytelling work. The prize is $10,000 and exhibition at the FoST Summit in October. There are two other things I feel I should mention here:

  1. I would not expect this prize to go to something primarily text-based.
  2. One of the categories of prize is for “works that foster empathy” (okay!), funded by the Charles Koch Foundation (…hm.) Considering the Koch brothers’ political involvement, I myself would want to look into that point a little further if I were considering applying for the prize, but your mileage may vary. So, FYI, reader, this is a thing that exists, and those are the connections that go with it.


This is well in advance, but on November 10-11, AdventureX will return, this time at the British Library. AdventureX is a conference focused on narrative-rich games, whether those are mobile or desktop, text-based or graphical; it’s grown significantly in size and professionalism over the last couple of years, and last year pretty definitively outgrew its previous venue. The way to get tickets in advance is to back the Kickstarter. [ETA: The Kickstarter places are all sold, but see the comments on other ways to get in.]

Releases

Rent-a-Vice is a Choice of Games story by Natalia Theodoridou, also known for her shorter interactive fiction published on Sub-Q. Described as a “cyberpunk-noir mystery”:

You’re a private investigator with a bad habit, an ex, and mountains of debt–troubles so deep that you stand to lose custody of your kid. When a mysterious client asks for your help finding their missing lover in the seamy world of virtual experience, it’s up to you to gather evidence, experience the technology for yourself, and solve the case.

There’s an author interview here as well.

*

Porpentine has a new interview and short story available at Living Content.

*

It Will Be Hard is an interactive graphic novel about a queer couple working to maintain a relationship despite having incompatible sexualities.

*

Perfectly Ordinary Ghosts is a Twine story by Victoria Smith, recently released on itch.io:

Perfectly Ordinary Ghosts is a domestic horror interactive fiction.

It is a real house made imaginary.

In summer the ghosts come, and the violet orange of the humid night reaches new depths.

*

Your Story Interactive has a new piece available for iOS and Android:

Survivor: JURASSIC KINGDOM is a “choose your own adventure” game where you are to show your survival skills in the most dangerous place on the planet – the wild jungles of the Amazon. Every choice that you make changes the course of your adventure.

Their previous work includes Zarya-1 and Sails in the Fog.

*

Raph Koster’s Postmortems is out: a series of essays about his work, starting with his text-MUD days. (I have a more detailed post about Postmortems scheduled for a few days from now, but it’s available now, so I thought I’d mention it.)

*

Adjacent contains an excerpt from Allison Parrish’s Articulations, a book of poetry made by machine; it is a book that, in Parrish’s account, “investigates how similarity (whether phonetic, syntactic or semantic) is used to create a sense of cohesion in poetic language.” You can find out more about how this was done, including access to some of the source, at this article; but what I want to highlight here instead is an example of the generated text, which, while nonsensical in a way, has a cadence and variety one does not usually associate with generated text, and often feels like it approaches meaning.

Thunder, lightning, fire and rain, and laughter, and inn-fires. After the fire of London and another of a pastoral vein: — the venerable original the adolescent and the venerable, richly the upland and the vale adorn, buddha, the holy and benevolent, of many a lover, who the heaven would think who have made me thus unworthy of a name or maybe there, like many another one and many a worthy dame.

And many a wonder seen: many and many a winter day and many a winter day on many a windmill turning in many a winding loose meander, in many a wood and many a field, with many a pull and many a fee, with many and many a call; with many a halt and many a call, with many a sigh and many a tear, many a sigh, and many a tear, many and many a year.

*

Meanwhile, if you enjoy procedurally generated Tarot, you might also be interested in this deck generator from @watawatabou that procedurally creates abstract artwork for each card.

*

Priscilla Snow has announced Memory Blocks, an IF anthology coming out in September.

Podcasting

Microphone GAIN presents a series of conversations between people in the games space who organize events — whether those are conferences, game jams, exhibitions, meetups, classes, or something else.

Most recently, Sagan Yee (Hand Eye Society) talked with Alex Schearer (Seattle Indies) about how to provide a more welcoming space for underrepresented groups — something very much on my mind in my own meetup organizing. Also in the series is a talk between Tanya X. Short (Pixelles, Kitfox) and Ben Esposito (Glitch City) about replacing key people and managing succession in volunteer / non-profit organizations.

Articles

I really enjoyed this blog post from S. John Ross about his process for making maps and plans of fantasy spaces.

Here’s a long piece about exhibiting video games in museums, and the challenges that go into curating a good exhibit of that kind.

Crowdfunding

This is not IF-related particularly, but the Edible Games Cookbook is too curious not to mention.

June 29, 2018

The Digital Antiquarian

Doing Windows, Part 2: From Interface Manager to Windows

by Jimmy Maher at June 29, 2018 11:41 AM

Bill Gates was as aware as everyone else of the abundant deficiencies of his own company’s hastily procured operating system for the IBM PC. So, in September of 1981, before the PC had even shipped and just a handful of months after VisiCorp had started their own similar project, he initiated work at Microsoft on a remedy for MS-DOS’s shortcomings. Initially called the “Interface Manager,” it marks the start of a long, fraught tale of struggle and woe that would finally culminate in the operating system still found on hundreds of millions of computers today.

As the name would imply, the Interface Manager was envisioned first and foremost as a way to make computing easier for ordinary people, a graphical layer to sit atop MS-DOS and insulate them from the vagaries of the command line. As such, it was the logical follow-on to an even older project inside Microsoft with similar goals, another whose distant descendant is still ubiquitous today: Microsoft Multiplan, the forefather of Excel.

In those days, people who had worked at the already legendary Xerox Palo Alto Research Center were traded around the computer industry like the scarce and precious commodity they were, markers of status for anyone who could get their hands on one of them. Thus it could only be regarded as something of a coup when Charles Simonyi came to work for Microsoft on February 6, 1981, after almost a decade spent at PARC. There he had been responsible for a word processor known as Bravo, the very first in history to implement the “what you see is what you get” philosophy — meaning that the text you saw on the monitor screen looked exactly like what would be produced by the printer. When the 32-year-old Hungarian immigrant, debonair and refined, showed his secretary at PARC a snapshot of his soon-to-be boss Bill Gates, 25-going-on-15 and looking like he could really use a shower and a haircut, she nearly fell out of her chair laughing: “Charles, what are you doing? Here you are at the best research lab in the world!” What could he say? A rapidly changing industry could make for strange bedfellows. Simonyi became Microsoft’s First Director of Applications Development.

At Microsoft, he found the Multiplan project, an attempt to make a competitor to VisiCalc, already underway. He pushed hard to turn it into not just another spreadsheet but a different kind of spreadsheet, placing a premium on ease of use in a field of business software already becoming known for its crypticness. For him, ease of use meant augmenting the long lists of command keystrokes with a menu of possibilities that would always be at the user’s fingertips. Simonyi:

I like the obvious analogy of a restaurant. Let’s say I go to a French restaurant and I don’t speak the language. It’s a strange environment and I’m apprehensive. I’m afraid of making a fool of myself, so I’m kind of tense. Then a very imposing waiter comes over and starts addressing me in French. Suddenly, I’ve got clammy hands. What’s the way out?

The way out is that I get the menu and point at something on the menu. I cannot go wrong. I may not get what I want — I might end up with snails — but at least I won’t be embarrassed.

But imagine if you had a French restaurant without a menu. That would be terrible.

It’s the same thing with computer programs. You’ve got to have a menu. Menus are friendly because people know what their options are, and they can select an option just by pointing. They do not have to look for something that they will not be able to find, and they don’t have to type some command that might be wrong.

It’s true that Multiplan’s implementation of menus was a long way from what a modern GUI user might expect to see. For one thing, they were lined up at the bottom rather than the top of the screen. (It would take software makers a surprisingly long time to settle on the topside placement we know today, as evidenced as well by the menus we saw at the bottom of Visi On’s windows in my previous article.) More generally, much of what Simonyi had been able to implement in Bravo on the graphical terminals at Xerox PARC way back in the mid-1970s was impossible on an IBM PC running Multiplan in the early 1980s, thanks to the lack of a mouse and a restriction to text-only display modes. One could only do what one could with the tools to hand — and by that standard, it must be said, Microsoft Multiplan was a pretty good first effort.

Multiplan was released in 1982. Designed to run inside as little as 64 K of memory and ported to several platforms (including even the humble Commodore 64), it struggled to compete with Lotus 1-2-3, which was designed from the start for an IBM PC with at least 256 K. The Lotus product would come to monopolize the spreadsheet market to the tune of an 80-percent share and sales of 5 million copies by the end of the 1980s, while Multiplan would do… rather less well. Still, the general philosophy that would guide Microsoft’s future efforts was there. Their software would distinguish itself by being approachable for the average person. Sometimes this would yield great results, other times it would come off more as a condescending caricature of user-friendliness, but it’s the philosophy that still guides Microsoft’s consumer software to this day.

Here we see Microsoft Multiplan in action. Note the two rows of menus along the bottom of the screen; this counted as hugely user-friendly circa 1982.

Charles Simonyi left an even bigger mark upon Microsoft’s next important application. Like Multiplan, Multi-Tool Word attempted to compete with the leading application of its type primarily on the basis of ease of use. This time, however, the application type in question was the word processor, and the specific application in question was WordStar, a product which was so successful that its publisher, MicroPro International, had gross sales that exceeded Microsoft’s as late as 1983. Determined to recreate what he had wrought at Xerox PARC more exactly than had been possible with Multiplan, a project he had come into in the middle, Simonyi convinced Microsoft to make a mouse just for the new word processor. (“The mouse,” InfoWorld magazine had to explain, “is a pointing device that is designed to roll on the desktop next to the keyboard of a personal computer.”)

The very first Microsoft mouse, which retailed for $195 in 1983.

Debuting in May of 1983, in many ways Multi-Tool Word was the forerunner of the operating environment that would come to be known as Microsoft Windows, albeit in the form of one self-contained application. Certainly most of the touted advantages to a GUI environment were in place. It implemented windows, allowing multiple documents to be open simultaneously within them; it utilized the mouse if anything more elegantly than the full-blown GUI environment Visi On would upon its debut six months later; it could run in graphical mode, allowing it to display documents just as they would later appear on the printer; it did its best to duplicate the interface of Multiplan, on the assumption that a user shouldn’t be expected to relearn the most basic interface concepts every time she needs to use a new application; it had an undo command to let the user walk back her mistakes. Unfortunately, it was also, like most early GUI experiments, slow in comparison to more traditional software, and it lacked such essential features as a spell checker and a mailing-list manager. Like Multiplan, it would have a hard time breaking through in one of the most competitive segments of the business-software market, one which was dominated first by the more powerful WordStar and then by the still more power-user-friendly WordPerfect. But, once again, it gave a glimpse of the future of computing as Microsoft envisioned it.

Multi-Tool Word. Here someone is using the mouse to create a text style. Note the WYSIWYG text displayed above.

Even as these applications were being developed at Microsoft, work on the Interface Manager, the software designed to integrate all of their interface enhancements and more into a non-application-specific operating environment, was continuing at its own pace. As usual with such projects, the Interface Manager wound up encompassing far more than just a new interface. Among other requirements, Gates had stated that it had to introduce a system of drivers to insulate applications from the hardware, and that it had to expose a toolkit to application programmers that was far larger and richer than MS-DOS’s 27 bare-bones function calls. Such a toolkit would allow programmers to make diverse applications with a uniform look and feel, thus delivering on another of the GUI’s most welcome promises.

This is one of a series of screenshots, published in the December 1983 issue of Byte Magazine, which together may represent the oldest extant evidence of Microsoft Windows’s early appearance. Note in particular the menus at the bottom of the screen. Oddly, a much more mature version of Windows, with menus at the top of the individual windows, was demonstrated at the Comdex trade show which began on November 23, 1983. Despite the magazine’s cover date, one therefore has to assume that these screenshots are older — probably considerably older, given how dramatic the differences between the Windows demonstrated at Comdex and the one we see here really are.

In early 1983, Bill Gates and a few colleagues met with IBM to show them their Interface Manager in progress. They had expected a thrilled reception, expected IBM to immediately embrace it as the logical next stage in the two companies’ partnership. What they got was something much different. “They thought it was neat stuff,” recalls Gates’s right-hand man Steve Ballmer, “but they said, ‘We have this other thing we are pretty excited about.'” IBM, it seemed, must be working on an extension to MS-DOS of their own. This unsatisfying and, from Microsoft’s perspective, vaguely alarming meeting heralded the beginning of a new, far less trusting phase in the two companies’ relationship. The unlikely friendship between the young and freewheeling Microsoft and the middle-aged and staid IBM had spawned the IBM PC, a defining success for both companies. Now, though, it was entering a much more prickly phase.

IBM had been happy to treat this scruffy kid named Bill Gates almost as an equal partner as long as their first general-purpose microcomputer remained nothing more than a marketplace experiment. Now, though, with the IBM PC the first bullet item on their stock reports, the one exploding part of an otherwise fairly stagnant business, they were beginning to wonder what they had wrought when they signed that generous deal to merely license MS-DOS from Microsoft rather than buy it outright. Gates had already made it clear that he would happily license the same operating system to others; this, combined with the open architecture and easy-to-duplicate commodity hardware of the IBM PC itself, was allowing the first of what would soon be known as the “PC clones” to enter the market, undercutting IBM’s prices. IBM saw this development, for understandable reasons, as a potential existential threat to the one truly exciting part of their business, and they weren’t at all sure whose side Microsoft was really on. The two partners were bound together in a hopeless tangle of contracts and mutual dependencies that could quite possibly never be fully severed. Still, there wasn’t, thought IBM, any point in getting themselves yet more entangled. From here on, then, IBM and Microsoft’s relationship would live in an uncertain no man’s land somewhere between partners and competitors — a situation destined to have major implications for the quest to replace MS-DOS with something better.

IBM’s suspicions about Microsoft were probably at least partly justified — Bill Gates’s reputation as a shark whom you trusted at your peril was by no means unearned — but undoubtedly became something of a self-fulfilling prophecy as well. Suddenly aware of the prospect of a showdown between their Interface Manager and whatever IBM was playing so close to the vest, Microsoft began reaching out to the emerging clone makers — to names like Compaq, Zenith, and Tandy — in a far more concerted way. If matters should indeed end in a showdown, these could be the bridges which would allow their system software rather than IBM’s to remain the standard in corporate America.

As if all this wasn’t creating concern enough inside Microsoft and IBM alike, there was also the question of what to make of the Apple Lisa, which had been announced in January of 1983 and would ship in June. The much-heralded first personal computer designed from the ground up for the GUI paradigm had a lot of problems when you looked below the surface. For one thing, it was far too expensive for even the everyday corporate market, what with its price tag of over $10,000. And it suffered from a bad case of over-ambition on the part of its software architects, who had decided to ask its 5 MHz Motorola 68000 processor to run a highly sophisticated operating system sporting virtual memory and cooperative multitasking. The inevitable result was that the thing was slow. A popular knock-knock joke inside the computer industry followed the “Who’s there?” with a fifteen-second pause before a “Lisa” finally came forth. If someone was going to pay over $10,000 for a personal computer, everyone agreed, she was justified in expecting it to run like a Ferrari rather than a Volkswagen bus.

The Lisa GUI, looking and working pretty much the way we still expect such things to look and work today.

When you looked beyond the pricing and performance problems, however, the Lisa was… well, the Lisa was amazing. Apple’s engineering team had figured this whole GUI thing out in a way that no one, not even the demigods at Xerox PARC, had managed to do before. The greatest testament to Apple’s genius today is just how normal the Lisa interface still looks, how easily one can imagine oneself just sitting down and getting to work using it. (Try saying that about any other unfamiliar operating system of this period!) All the stuff we expect is present, working as we expect it to: draggable windows with scroll bars on the side and sizing widgets attached to the corners; pull-down menus up there at the top of the screen; a desktop to function as the user’s immediate workspace; icons representing disks, files, and applications which can be dragged from place to place or even thrown in the trash can; drag-and-drop and copy-and-paste. Parts of all this had appeared before in other products, such as the Xerox Star, but never before had it all come together like this. After the Lisa, further refinements of the GUI would all be details; the really essential, really important pieces were all in place. It instantly made all of the industry’s many other GUI projects, including Microsoft’s, look hopelessly clunky.

Thanks not least to that $10,000 price tag, the Lisa itself was doomed to be a commercial failure. But Apple was promising a new machine for 1984, one which would be cheaper and would employ the same interface without the speed-sapping virtual memory and multitasking. For obvious reasons, the prospect of this next Apple computer, to be called the Macintosh, made plenty of people in the MS-DOS world, among them Bill Gates, very nervous.

One can view much of the history of the personal computer in the United States through the shifting profiles of Bill Gates and Steve Jobs, those two personalities who will always be most identified with a whole era of technology in the public imagination. Just a few years hence from 1983, Jobs would be widely viewed as a has-been in his early thirties, a flighty hippie whom the adults who were now running Apple had wisely jettisoned; Gates, on the other hand, would be a darling of Wall Street well on the way to his reputation as the computer industry’s all-powerful Darth Vader. In 1983, however, the picture was very different. Jobs was still basking in the glory of having been one half — and by far the most charismatic half at that — of the pair of dreamers who had supposedly invented the personal computer in that famous Cupertino garage of theirs, while Gates was the obscure head of a rather faceless company whose importance was understood only by industry insiders. None could foresee the utter domination of virtually all personal computing that would soon be within Gates’s grasp. He was still balanced on the divide between his old way of doing business, as the head of an equal-opportunity purveyor of programming languages and other software to whoever cared to pay for them, and his new, as the supreme leader in the cause of one platform to rule them all under the banner of Microsoft.

This list of the top software companies of 1983 provides a fascinating snapshot of an industry in rapid transition. VisiCorp, which would easily have topped the list in any of the three previous years, has fallen back to number 5, already a spent force. Lotus, the spreadsheet-making rival responsible for their downfall, will take over the top spot in 1984 and remain there through 1986. The biggest company of all this year is the now-forgotten MicroPro, maker of WordStar, the most popular early word processor; they will be wiped out by WordPerfect, which doesn’t even make this list yet, within a year or two. Finally, note the number of home- and entertainment-software publishers which manage to sneak onto the bottom half of this list. In years to come, the business-software market will continue to explode so dramatically in contrast to a comparatively slow-growing home-computing software market as to make that a thing of the past.

So, Jobs still had the edge on Gates in lots of ways in 1983, and he wasn’t afraid to let him know. He expected Microsoft to support the Macintosh in the form of application software. Specifically, he expected them to provide a spreadsheet, a business-graphics application, and a database; they’d signed a contract to do so, and been provided with their first extremely crude prototype of the new machine in return, back in January of 1982. According to Mike Murray, the Mac’s first marketing manager, Jobs would call Gates up and hector him in a way that no one would have dared talk to the Bill Gates of just a few years later: “You get down here right now. I don’t care what your schedule says. I want you down here tomorrow morning at 8:30 and I want you to tell me exactly what you’re doing [for the Macintosh] at Microsoft.”

For his part, Gates was willing to play the role of Jobs’s good junior partner, just as he had played the same role so dutifully for IBM, but he never lost sight of the big picture. The fact was that when it came to business sense, the young Bill Gates was miles ahead of the young Steve Jobs. One can’t help but imagine him smiling to himself when Jobs lectured him on how he should forget about MS-DOS and the rest of the system-software business, how application software was where the money was really at. Gates knew something which Jobs had apparently yet to realize: if you control the operating system on people’s computers, you can potentially control everything.

Still, Jobs was aware enough of business realities to see an environment like the Interface Manager, available on commodity clone hardware much cheaper than the Macintosh, as a significant threat. He reminded Gates pointedly of language in the January 1982 contract between the two companies which prohibited Microsoft from using knowledge gained of the Macintosh in competing products for other platforms. Gates respectfully but firmly held his ground, not even precisely denying that insights gained from the Macintosh might find their way into the Interface Manager but rather saying that the “competing products” mentioned in the contract would naturally have to mean other spreadsheets, business-graphic applications, or databases — not full-fledged operating environments. Further, he pointed out, the restrictions only applied until January 1, 1984, or the first shipment of the Macintosh, whichever came first. By the time the Interface Manager was actually ready to sell, it would all be a moot point anyway.

It was at about this time that the Interface Manager became suddenly no longer the Interface Manager. The almost aggressively generic name of “Windows” was the brainchild of a new marketing manager named Rowland Hanson, who was just 31 years old when he came to Microsoft but had already left his stamp on such brands as Betty Crocker, Contadina, and Neutrogena. At his first interview with Bill Gates, the latter’s words immediately impressed him:

You know, the only difference between a dollar-an-ounce moisturizer and a forty-dollar-an-ounce moisturizer is in the consumer’s mind. There is no technical difference between moisturizers. We will technically be the best software. But if people don’t believe it or people don’t recognize it, it won’t matter. While we’re on the leading edge of technology, we also have to be creating the right perception about our products and our company, the right image.

Who would have thought that this schlubby-looking nerd understood so much about marketing? Having taken the interview on a lark, Hanson walked out of Gates’s office ready to help him create a new, slicker image for Microsoft. He knew nothing whatsoever about computers, but that didn’t matter. He hadn’t known anything about moisturizers either when he went to work for Neutrogena.

Hanson devised the approach to product branding that persists at Microsoft to this day. Each product’s name would be stripped down to its essence, creating the impression that it was the definitive — or the only — product of its type. The only ornamentation would be the Microsoft name, to make sure no one forgot who made it. Thus Multi-Tool Word, after just a few months on the market under that unwieldy name, now became simply Microsoft Word. If he had arrived just a little earlier, Hanson grumbled, he would have been able to make sure that Multiplan shipped as Microsoft Spreadsheet, and MS-DOS — the software that “tells the IBM PC how to think” in his new marketing line — would have had the first part of the abbreviation spelled out every single time: Microsoft DOS. Luckily, there was still time to save the next generation of Microsoft system software from the horrid name of Interface Manager. It should rather be known simply as Microsoft Windows. “It appeared there were going to be multiple systems like this on the market,” remembers Hanson. “Well, we wanted our name basically to define the generic.” Gates agreed, and one of the most enduring brands in the history of computing was born.

The Windows project had run hot and cold inside Microsoft over the past couple of years in the face of other pressing priorities. Now, though, Gates started pushing hard under the prompting of external developments. The Macintosh was scheduled to make its debut in January of 1984. Just as worryingly, VisiCorp planned to ship Visi On at last before 1983 was up, and had scheduled a big, much-anticipated unveiling of the final product for the Comdex business-computing trade show which would begin on November 23. Determined to avoid the impression that Microsoft was being left behind by the GUI arms race, and even more determined to steal VisiCorp’s thunder, Gates wanted a Windows unveiling before Comdex. To help accomplish that, he hired another refugee from Xerox named Scott MacGregor and put him in charge of the project’s technical architecture. At 26 years old, MacGregor was a little too young even by the early-blooming standards of hacker culture to have made a major contribution during the glory days of Xerox PARC, but he had done the next best thing: he had designed the windowing system for the Star office workstation, the only tangible commercial product Xerox themselves ever developed out of all the work done with mice and menus at PARC. Other Xerox veterans would soon join MacGregor on the Windows project, which spent the late summer and early autumn of 1983 in a mad scramble to upstage its various competitors.

On November 10, at a lavish event inside New York City’s posh Helmsley Palace Hotel, Microsoft officially announced Windows, saying it would be available for purchase by April of 1984 and that it would run on a computer without a hard drive and with as little as 192 K of memory — a stark contrast to Visi On’s minimum specification of a hard-drive-equipped 512 K machine. And, unlike under Visi On, all applications, even those not specifically written for Windows, would be able to run in the environment, at least after a fashion. “Misbehaved” programs, as Microsoft referred to what was actually the entirety of the MS-DOS application market at the time of the unveiling, could be started through Windows but would run in full-screen mode and not have access to its features; Windows would effectively shut down when the time came to run such an application, then start itself back up when the user exited. It wasn’t ideal, but it struck most people as an improvement on Visi On’s our-way-or-the-highway approach.

The dirty little secret hiding behind this very first demonstration of Windows was that the only actual Windows application that existed at the time was a little paint program Microsoft’s programmers had put together, along with a few applets like a calendar, a calculator, and an extremely basic text editor. Microsoft had, they claimed, “commitments” from such big players as Lotus, Ashton-Tate, and Peachtree to port their vanilla MS-DOS applications to Windows, but the reality was that none of these took the form of much more than a vague promise and a handshake.

The work Bill Gates had been doing to line up support from the emerging community of clone makers was in plainer evidence. Microsoft could announce that no fewer than 23 of their current MS-DOS licensees had signed up to ship Windows on their machines as well, including names like Compaq, Data General, Hewlett-Packard, Radio Shack/Tandy, Wang, and Zenith. The only important licensee absent from the list was the biggest of them all, IBM — a fact which the business and technology press could hardly fail to notice. Yet the plan was, as Gates didn’t hesitate to declare, to have Windows on 90 percent of all MS-DOS machines by the end of 1984. Where did that leave IBM? Among the trailing 10 percent?

As it happened, Microsoft was still trying to get IBM onboard the Windows train. The day after the big rollout, Gates flew from New York to Boca Raton, Florida, where the division of IBM responsible for their microcomputers was located, and made another pitch. Perhaps he believed that the good press stemming from the previous day’s festivities, which was to be found in the business and technology sections of this day’s newspapers all over the country, would sway them. If so, he was disappointed. Once again, IBM was noncommittal in all senses of the adjective, alluding vaguely to a potential similar product of their own. Then, a few days after Gates left them, IBM announced that they would distribute Visi On through their dealer network. This move was several steps short of anointing it the only or the official GUI of the IBM PC, but it was nevertheless a blessing of a certain type, and far more than IBM had yet agreed to do for Windows. It was becoming abundantly clear that IBM was, at the very least, hedging their bets.

A week later, the Comdex show opened in Las Vegas, with the finished Visi On on public display for the first time. Just a few booths down from that spectacle, Microsoft, still determined to undermine Visi On’s debut, showed Windows as well. Indeed, Windows was everywhere at Comdex; “You couldn’t take a leak in Vegas without seeing a Windows sticker,” remembers one Microsoft executive. Yet the actual product behind all the hype was presented only in the most circumscribed way. Microsoft employees ran through a carefully scripted spiel inside the Windows booth, making sure the public got nowhere close to the controls of the half-finished (at best) piece of software.

Still, Microsoft had some clear advantages to point out when it came to Windows, and point them out they did. For one, there was the aforementioned ability to run — or at least to start — non-Windows applications within the environment. For another, true multitasking would be possible under Windows, claimed Microsoft, not just the concurrently open applications of Visi On. And it would be possible, they said, to write Windows programs on the selfsame Windows computer on which they would run, in contrast to the $20,000 minicomputer one had to buy to develop for Visi On. This led Microsoft to refer to Windows as the open GUI, a system carrying forward the original promise of the personal computer as an anything tool for ordinary users.

In the nuts and bolts of their interfaces as well, the two systems presented contrasting approaches. The Visi On interface strongly resembled something that might have been seen at Xerox PARC in the 1970s, but Windows betrayed the undeniable influence of Apple’s recent work on the Lisa and, as would later become clear, the Macintosh — not hugely surprising, given that Microsoft had been able to follow the step-by-step evolution of the latter since January of 1982, thanks to their privileged position as contracted application developers for the new machine. Windows already seemed to operate a bit more intuitively than the rather awkward Visi On; Microsoft already understood, as their competitor so plainly did not, that a mouse could be used for things other than single clicks.

In other ways, though, Windows was less impressive than Visi On, not to mention the Lisa and Macintosh. And one of these ways was, ironically given the new product’s name, the windows themselves. They weren’t allowed to overlap one another — at all. In what Microsoft spun as the “automatic window layout” feature, sizing one window would cause all of the others to resize and reposition themselves in response. Nor could you freely drag windows around the screen like you could on the Lisa and Macintosh. “It’s the metaphor of the neat desktop,” said Steve Ballmer, spinning like mad. Neat or not, this wasn’t quite the way most people expected a window manager to work — and yet Microsoft would stick with it for a well-nigh absurdly long time to come.

A Quick Tour of Windows as Shown at the 1983 Comdex Show


None other than Dan Bricklin of VisiCalc fame visited the November 1983 Comdex show with a camcorder. The footage he took is a precious historical document, not least in showing Windows in action as it existed at the time of these first public demonstrations. Much must still be surmised thanks to the shaky camerawork and the fact that the public was kept at arm’s length from a far-from-complete piece of software, but we’re very lucky Bricklin and his camcorder were there that day. We learn from his footage that Windows had progressed hugely since the screenshot shown earlier in this article, showing the clear influence of Apple’s Lisa and Macintosh interfaces.

Windows apparently boots up to a blank screen with a row of (non-draggable) icons at the bottom, each representing an installed application.

Here a text editor, a clock applet, and a paint program have been opened. Unlike in Visi On and Apple’s GUIs, windows cannot overlap one another. On the other hand, note that the menu bar has been moved to the top of the window, where we expect it to be today. On the other other hand, it appears that the menu still provides single-click options only, not drop-down lists of choices. Note how cluttered the two-line text-editor menu at the top is.

At the bottom of each window (just to the left of the mouse pointer in the photograph) is a set of widgets. From left, these are: minimize the window; maximize the window (minimizing all of the others in the process, since windows are not allowed to overlap one another); automatically “tile” the window with the others that are open (it’s not entirely clear how this worked); initiate a resize operation; close the window. Despite the appearance of a resizing widget in this odd location, it does appear from other video evidence that it was already possible to size a window by dragging on its border. Whether one first had to click the resizing widget to initiate such an operation is, once again, unclear.

A scroll bar is in place, but it’s at the left rather than the right side of the window.

A few weeks after Comdex closed up shop, VisiCorp shipped Visi On, to cautiously positive press notices behind which lurked all of the concerns that would prove the product’s undoing: its high price; its high system requirements and slow performance even on a hot-rod machine; its lack of compatibility with vanilla MS-DOS applications; the huge hardware hurdle developers had to leap to make applications for the system. Bill Gates, in other words, needn’t worry himself overmuch on that front.

But a month after Visi On made its underwhelming debut, the Apple Macintosh made its overwhelming version of same in the form of that famous “1984” television advertisement, which aired to an audience of 96 million viewers during the third quarter of the Super Bowl. Two days later, when the new computer was introduced in a slightly more orderly way at De Anza College’s Flint Auditorium, Bill Gates was there to support his sometime friend, sometime rival Steve Jobs in the biggest moment of his busy life to date. Versions of Microsoft Multiplan and BASIC for the Macintosh, Gates could announce there, would be available from the day the new computer shipped.

The announcement of the Mac version of Microsoft BASIC at the ceremony marked one of the last gasps of the old Microsoft business model which dated back to the days of the Altair kit computer, when they would supply a BASIC as a matter of course for every new microcomputer to come down the pipe.[1] But more important than the BASIC or even the Mac Multiplan was the mere fact that Microsoft was there at all in Flint Auditorium, getting their piece of the action. Bill Gates was doing what he always did, seeking to control those parts of the industry which he could and exploit those parts which he couldn’t. He didn’t know whether the Macintosh was destined to take over business computing entirely, as some were claiming, or whether its flaws, all too easily overlooked under the auditorium’s bright lights, would undermine its prospects in the end. Certainly those flaws were legion when you dug below the surface, including but not limited to a price which was, if vastly less than that of the Lisa, still far more than a typical MS-DOS machine; the lack of a hard drive; the straitened memory of just 128 K; the lack of amenability to expansion, which only exacerbated the previous three flaws; the lack of multitasking or even the ability to open concurrent programs; and an interface which corporate America might read as too friendly, crossing past the friend zone into cutesy and unbusinesslike. But what Bill Gates did know, with every bit as much certainty as Steve Jobs, was that the GUI in the abstract was the future of computing.

In June of 1984, with Windows having missed its release target of two months previous but still hopefully listed in Microsoft’s catalog as “coming soon,” Gates and Steve Ballmer wrote an internal memo which described in explicit, unvarnished detail their future strategy of playing the Macintosh and Windows off against one another:

Microsoft believes in the mouse and graphics as invaluable to the man-machine interface. We will bet on that belief by focusing new development on the two new environments with the mouse and graphics: Macintosh and Windows.

This also makes sense from a marketing perspective. Our focus will be on the business user, a customer who can afford the extra hardware expense of a mouse and high-resolution screen, and who will pay premium prices for quality easy-to-use software.

Microsoft will not invest significant development resources in new Apple II, MSX, CP/M-80, or character-based IBM PC applications. We will finish development and do a few enhancements to existing products.

Over the foreseeable future, our plan is to implement products first for the Mac and then to port them to Windows. We are taking care in the design of the Windows user interface to make this as easy as possible.

In his more unguarded moments, Gates would refer to Windows as “Mac on the [IBM] PC.”

Just one worrisome unknown continued to nag at him: what role would IBM play in his GUI-driven world of the future?

(Sources: the books The Making of Microsoft: How Bill Gates and His Team Created the World’s Most Successful Software Company by Daniel Ichbiah and Susan L. Knepper, Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace and Jim Erickson, Gates: How Microsoft’s Mogul Reinvented an Industry and Made Himself the Richest Man in America by Stephen Manes and Paul Andrews, and Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company by Owen W. Linzmayer; PC World of September 1983; InfoWorld of May 30 1983, November 21 1983, April 2 1984, October 21 1991, and November 20 1995; MacWorld of September 1991; Byte of December 1983. Finally, I owe a lot to Nathan Lineback for the histories, insights, comparisons, and images found at his wonderful online “GUI Gallery.”)


[1] That said, it wasn’t quite the last gasp: Microsoft would also supply a BASIC for the Commodore Amiga, constituting the only piece of software they would ever develop for that machine, shortly after its release in 1985.

June 28, 2018

Renga in Blue

Zork I: Getting Out

by Jason Dyer at June 28, 2018 10:40 PM

I’m at the point where I have a mostly complete map, so I’m in the “logistics” phase where I am planning what order to visit the rooms and gather treasures.

It’s fairly tricky because of the thief. Just like in the original Zork, there’s a “lean and hungry gentleman” who can go most places on the map and will try to take any treasures you have (or ones that are lying around), and possibly stab you with his stiletto while he’s at it. You can eventually fight the thief with your sword, but it takes a higher point score to manage (like “experience points” in Dungeons and Dragons, I suppose), so the fight has to wait until later in the game. Here is the result of trying to start the fight early:

Someone carrying a large bag is casually leaning against one of the walls here.
He does not speak, but it is clear from his aspect that the bag will be taken only over his dead body.
Your sword has begun to glow very brightly.

> kill thief with sword
A good slash, but it misses the thief by a mile.
The thief draws blood, raking his stiletto across your arm.

> kill thief with sword
You charge, but the thief jumps nimbly aside.
The thief just left, still carrying his large bag. You may not have noticed that he appropriated the valuables in the room.
Your sword is no longer glowing.

The other tricky element is perhaps the most clever finesse in the game, and one of the very first things that happens.

> move rug
With a great effort, the rug is moved to one side of the room, revealing the dusty cover of a closed trap door.

> open trap door
The door reluctantly opens to reveal a rickety staircase descending into darkness.

> d
You have moved into a dark place.
The trap door crashes shut, and you hear someone barring it.

It is pitch black. You are likely to be eaten by a grue.
Your sword is glowing with a faint blue glow.

I’m not referring to the iconic “you are likely to be eaten by a grue” (which is indeed lovely) or “your sword is glowing with a faint blue glow” (which always happens with enemies nearby, and gives a nice texture to the world-building even if it is cadged from Tolkien).

Iconic enough that someone made a cupcake. From Steelhead Studio.

I’m referring to the fact that the way you came in is locked behind you, so you cannot take the same way out.

First: who barred the door? One might think the thief, but at least in this version of Zork, he never gets into the upstairs house. It can’t have happened on its own (“you hear someone barring it”), which is what I imagined when I was a child. Spoiler theory in rot13 (from a later Zork, so don’t reveal if you only know this game): gur qhatrba znfgre sebz gur irel raq bs gur gevybtl oneerq gur qbbe, gb sbepr gur cynlre punenpgre gb rkcyber engure guna whfg eha njnl.

Second is simply the design finesse of forcing the player to look for another exit. And there are plenty, including one a couple of steps away: a chimney which is too narrow to carry much of anything through, including a large treasure that is in the same room.

Overall I count four distinct methods (not including the fact that the trapdoor eventually will stay open), which really gives the feel of player choice. There are enough routes, and it is non-obvious which one is the most efficient (I’m guessing every walkthrough of this game is very different).

One last catch: while most of the edits from the original mainframe Zork seem to be simply rooms removed (along with exits that no longer exist), there is one section that is changed enough that I’m not sure what to do.

> e
Dome Room
You are at the periphery of a large dome, which forms the ceiling of another room below. Protecting you from a precipitous drop is a wooden railing which circles the dome.

> tie rope to railing
The rope drops over the side and comes within ten feet of the floor.

> d
Torch Room
This is a large room with a prominent doorway leading to a down staircase. Above you is a large dome. Up around the edge of the dome (20 feet up) is a wooden railing. In the center of the room sits a white marble pedestal.
A piece of rope descends from the railing above, ending some five feet above your head.
Sitting on the pedestal is a flaming torch, made of ivory.

The rope is too high to climb back up, and there don’t seem to be any normal exits.

There are a few other locations beyond the torch room, but otherwise this seems to be a dead end. There’s a granite wall that I recall should let me teleport with just >TOUCH GRANITE WALL, but it doesn’t work. My theory is that I need to defeat the thief first, because the other end is in the thief’s lair, but it’s possible there’s another angle altogether I haven’t thought of.

This is unfortunate because the torch is the “unlimited turns” light source of the game; the lantern will eventually run out of battery and go dark. I’m confident there was a lot of intention here on the part of the authors; they probably felt that being able to walk anywhere with an unlimited light source too early would undercut the tension they did such a good job of building by barring the initial way out. (I remember my childhood self having particular dread of the dark in this game, especially the time my lantern winked out and all I was left with was a book of matches.)

I’m otherwise in the clear on all the rest of the puzzles, so it’s possible I’ll have won this by next time I report in.

The People's Republic of IF

July meetup

by zarf at June 28, 2018 06:41 PM

The Boston IF meetup for July will be Thursday, July 19, 6:30 pm, MIT room 14N-233.